Test Report: Docker_Linux_containerd 22000

                    
3f3a61283993ee602bd323c44b704727ac3a4ece:2025-11-29:42558

Failed tests (4/333)

Order  Failed test                                                  Duration (s)
303    TestStartStop/group/old-k8s-version/serial/DeployApp         13.54
304    TestStartStop/group/no-preload/serial/DeployApp              14.48
327    TestStartStop/group/embed-certs/serial/DeployApp             14.5
329    TestStartStop/group/default-k8s-diff-port/serial/DeployApp   15.42
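All four failures are in the DeployApp step of TestStartStop. In the old-k8s-version log below the step fails because "ulimit -n" inside the busybox test pod returns 1024 instead of the expected 1048576. A minimal way to re-run that check by hand against the same profile, sketched from the commands in the log (kubectl wait stands in for the test's own readiness polling), is:

    # deploy the test pod and wait for it to become Ready
    kubectl --context old-k8s-version-295154 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-295154 wait --for=condition=Ready pod/busybox --timeout=8m
    # the test expects the open-file limit inside the pod to be 1048576
    kubectl --context old-k8s-version-295154 exec busybox -- /bin/sh -c "ulimit -n"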
TestStartStop/group/old-k8s-version/serial/DeployApp (13.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-295154 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [54baf2f4-8de5-4f66-92ac-f5315174d940] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [54baf2f4-8de5-4f66-92ac-f5315174d940] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003343341s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-295154 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-295154
helpers_test.go:243: (dbg) docker inspect old-k8s-version-295154:

-- stdout --
	[
	    {
	        "Id": "1d2dc93defe08823e969abc1083166e5b987c49003d867c47f6dab538c73042e",
	        "Created": "2025-11-29T09:01:32.670265754Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 494787,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:01:32.709136408Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/1d2dc93defe08823e969abc1083166e5b987c49003d867c47f6dab538c73042e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1d2dc93defe08823e969abc1083166e5b987c49003d867c47f6dab538c73042e/hostname",
	        "HostsPath": "/var/lib/docker/containers/1d2dc93defe08823e969abc1083166e5b987c49003d867c47f6dab538c73042e/hosts",
	        "LogPath": "/var/lib/docker/containers/1d2dc93defe08823e969abc1083166e5b987c49003d867c47f6dab538c73042e/1d2dc93defe08823e969abc1083166e5b987c49003d867c47f6dab538c73042e-json.log",
	        "Name": "/old-k8s-version-295154",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-295154:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-295154",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1d2dc93defe08823e969abc1083166e5b987c49003d867c47f6dab538c73042e",
	                "LowerDir": "/var/lib/docker/overlay2/10e010eea53c4090a92173793351457113c92b95e4addfb0007c310be02782d4-init/diff:/var/lib/docker/overlay2/eb180691bce18b8d981b2d61ed0962851c615364ed77c18ff66d559424569005/diff",
	                "MergedDir": "/var/lib/docker/overlay2/10e010eea53c4090a92173793351457113c92b95e4addfb0007c310be02782d4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/10e010eea53c4090a92173793351457113c92b95e4addfb0007c310be02782d4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/10e010eea53c4090a92173793351457113c92b95e4addfb0007c310be02782d4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-295154",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-295154/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-295154",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-295154",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-295154",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d61dde634f57a1405987eb1bcb1468d94550e880fe30f55b1f686d12c8c280ee",
	            "SandboxKey": "/var/run/docker/netns/d61dde634f57",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-295154": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "aea341d97cf5d4f6668e24ade3efa38cebbca9060f995994226a6ded161b076c",
	                    "EndpointID": "7f306b5e076751e147ce07bdf687dd5284be41e6bffcdf4542e80d7a90deb9e2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "e6:d5:92:ca:f6:04",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-295154",
	                        "1d2dc93defe0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-295154 -n old-k8s-version-295154
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-295154 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-295154 logs -n 25: (1.145289555s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬────────────
─────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼────────────
─────────┤
	│ ssh     │ -p cilium-770004 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ ssh     │ -p cilium-770004 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ ssh     │ -p cilium-770004 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ ssh     │ -p cilium-770004 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ ssh     │ -p cilium-770004 sudo containerd config dump                                                                                                                                                                                                        │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ ssh     │ -p cilium-770004 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ ssh     │ -p cilium-770004 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ ssh     │ -p cilium-770004 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ ssh     │ -p cilium-770004 sudo crio config                                                                                                                                                                                                                   │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ delete  │ -p cilium-770004                                                                                                                                                                                                                                    │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │ 29 Nov 25 09:00 UTC │
	│ start   │ -p force-systemd-env-693869 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-693869 │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │ 29 Nov 25 09:01 UTC │
	│ start   │ -p pause-563162 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                                                                              │ pause-563162             │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │ 29 Nov 25 09:01 UTC │
	│ ssh     │ force-systemd-env-693869 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-693869 │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ delete  │ -p force-systemd-env-693869                                                                                                                                                                                                                         │ force-systemd-env-693869 │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ start   │ -p cert-options-536258 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-536258      │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ pause   │ -p pause-563162 --alsologtostderr -v=5                                                                                                                                                                                                              │ pause-563162             │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ unpause │ -p pause-563162 --alsologtostderr -v=5                                                                                                                                                                                                              │ pause-563162             │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ pause   │ -p pause-563162 --alsologtostderr -v=5                                                                                                                                                                                                              │ pause-563162             │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ delete  │ -p pause-563162 --alsologtostderr -v=5                                                                                                                                                                                                              │ pause-563162             │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ ssh     │ cert-options-536258 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-536258      │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ ssh     │ -p cert-options-536258 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-536258      │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ delete  │ -p cert-options-536258                                                                                                                                                                                                                              │ cert-options-536258      │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ delete  │ -p pause-563162                                                                                                                                                                                                                                     │ pause-563162             │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ start   │ -p old-k8s-version-295154 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-295154   │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:02 UTC │
	│ start   │ -p no-preload-924441 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-924441        │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:02 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴────────────
─────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:01:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:01:26.371812  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:01:26.372231  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:01:26.372304  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:01:26.372374  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:01:26.406988  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:26.407016  460401 cri.go:89] found id: "40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac"
	I1129 09:01:26.407022  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:26.407027  460401 cri.go:89] found id: ""
	I1129 09:01:26.407038  460401 logs.go:282] 3 containers: [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:01:26.407111  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.413707  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.419492  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.424920  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:01:26.424999  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:01:26.456369  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:26.456395  460401 cri.go:89] found id: ""
	I1129 09:01:26.456406  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:01:26.456466  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.462064  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:01:26.462133  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:01:26.492837  460401 cri.go:89] found id: ""
	I1129 09:01:26.492868  460401 logs.go:282] 0 containers: []
	W1129 09:01:26.492879  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:01:26.492887  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:01:26.492955  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:01:26.521715  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:26.521747  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:26.521754  460401 cri.go:89] found id: ""
	I1129 09:01:26.521763  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:01:26.521821  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.526872  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.531295  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:01:26.531353  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:01:26.558218  460401 cri.go:89] found id: ""
	I1129 09:01:26.558248  460401 logs.go:282] 0 containers: []
	W1129 09:01:26.558257  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:01:26.558264  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:01:26.558313  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:01:26.587221  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:26.587246  460401 cri.go:89] found id: "f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:26.587253  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:26.587258  460401 cri.go:89] found id: ""
	I1129 09:01:26.587268  460401 logs.go:282] 3 containers: [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:01:26.587328  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.591954  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.596055  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.600163  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:01:26.600219  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:01:26.628586  460401 cri.go:89] found id: ""
	I1129 09:01:26.628613  460401 logs.go:282] 0 containers: []
	W1129 09:01:26.628624  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:01:26.628633  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:01:26.628690  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:01:26.657553  460401 cri.go:89] found id: ""
	I1129 09:01:26.657581  460401 logs.go:282] 0 containers: []
	W1129 09:01:26.657591  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:01:26.657603  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:01:26.657622  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:01:26.721559  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:01:26.721584  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:01:26.721601  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:26.756136  460401 logs.go:123] Gathering logs for kube-controller-manager [f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00] ...
	I1129 09:01:26.756165  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:26.787789  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:01:26.787827  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:01:26.838908  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:01:26.838943  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:01:26.875689  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:01:26.875723  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:01:26.946907  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:01:26.946941  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:26.982883  460401 logs.go:123] Gathering logs for kube-apiserver [40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac] ...
	I1129 09:01:26.982919  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac"
	W1129 09:01:27.012923  460401 logs.go:130] failed kube-apiserver [40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac": Process exited with status 1
	stdout:
	
	stderr:
	E1129 09:01:27.010611    2688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac\": not found" containerID="40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac"
	time="2025-11-29T09:01:27Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac\": not found"
	 output: 
	** stderr ** 
	E1129 09:01:27.010611    2688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac\": not found" containerID="40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac"
	time="2025-11-29T09:01:27Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac\": not found"
	
	** /stderr **
	I1129 09:01:27.012941  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:01:27.012953  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:27.051493  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:01:27.051526  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:27.089722  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:01:27.089755  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:27.138471  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:01:27.138504  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:27.172932  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:01:27.172962  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:27.207844  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:01:27.207878  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:01:29.500031  494126 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:01:29.500142  494126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:01:29.500153  494126 out.go:374] Setting ErrFile to fd 2...
	I1129 09:01:29.500159  494126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:01:29.500372  494126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-255825/.minikube/bin
	I1129 09:01:29.500882  494126 out.go:368] Setting JSON to false
	I1129 09:01:29.501996  494126 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6233,"bootTime":1764400656,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:01:29.502070  494126 start.go:143] virtualization: kvm guest
	I1129 09:01:29.506976  494126 out.go:179] * [no-preload-924441] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:01:29.508162  494126 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:01:29.508182  494126 notify.go:221] Checking for updates...
	I1129 09:01:29.510318  494126 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:01:29.511334  494126 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-255825/kubeconfig
	I1129 09:01:29.516252  494126 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-255825/.minikube
	I1129 09:01:29.517321  494126 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:01:29.518374  494126 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:01:29.519877  494126 config.go:182] Loaded profile config "cert-expiration-368536": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:01:29.519989  494126 config.go:182] Loaded profile config "kubernetes-upgrade-806701": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:01:29.520095  494126 config.go:182] Loaded profile config "old-k8s-version-295154": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1129 09:01:29.520225  494126 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:01:29.546023  494126 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 09:01:29.546141  494126 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:01:29.607775  494126 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:81 SystemTime:2025-11-29 09:01:29.596891851 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:01:29.607908  494126 docker.go:319] overlay module found
	I1129 09:01:29.610288  494126 out.go:179] * Using the docker driver based on user configuration
	I1129 09:01:29.611200  494126 start.go:309] selected driver: docker
	I1129 09:01:29.611220  494126 start.go:927] validating driver "docker" against <nil>
	I1129 09:01:29.611231  494126 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:01:29.611850  494126 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:01:29.673266  494126 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:81 SystemTime:2025-11-29 09:01:29.662655452 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:01:29.673484  494126 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 09:01:29.673822  494126 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:01:29.675454  494126 out.go:179] * Using Docker driver with root privileges
	I1129 09:01:29.679127  494126 cni.go:84] Creating CNI manager for ""
	I1129 09:01:29.679243  494126 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:01:29.679264  494126 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 09:01:29.679351  494126 start.go:353] cluster config:
	{Name:no-preload-924441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-924441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:01:29.680591  494126 out.go:179] * Starting "no-preload-924441" primary control-plane node in "no-preload-924441" cluster
	I1129 09:01:29.681517  494126 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1129 09:01:29.682533  494126 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:01:29.683845  494126 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:01:29.683975  494126 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/config.json ...
	I1129 09:01:29.683971  494126 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:01:29.684042  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/config.json: {Name:mk4df9140f26fdbfe5b2addb71b44607d26b26a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:29.684181  494126 cache.go:107] acquiring lock: {Name:mka90f7eac55a6e5d6d9651fc108f327509b562f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684233  494126 cache.go:107] acquiring lock: {Name:mk2c250a4202b546a18f0cc7664314439a4ec834 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684259  494126 cache.go:107] acquiring lock: {Name:mk976aaa4e01b0c9e83cc6925b8c3c72804bfa25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684288  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1129 09:01:29.684299  494126 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 144.373µs
	I1129 09:01:29.684315  494126 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1129 09:01:29.684321  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1129 09:01:29.684322  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1129 09:01:29.684332  494126 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 80.37µs
	I1129 09:01:29.684333  494126 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 119.913µs
	I1129 09:01:29.684341  494126 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1129 09:01:29.684344  494126 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1129 09:01:29.684332  494126 cache.go:107] acquiring lock: {Name:mkff44f5b6b961ddaa9acc3e74cf0480b0d2f776 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684358  494126 cache.go:107] acquiring lock: {Name:mk6080f4393a19fb5c4d6f436dce1a2bb1688f86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684378  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1129 09:01:29.684387  494126 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 58.113µs
	I1129 09:01:29.684395  494126 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1129 09:01:29.684399  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1129 09:01:29.684282  494126 cache.go:107] acquiring lock: {Name:mkb8e7a67c98a0b8caa208116d415323f5ca7ccc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684410  494126 cache.go:107] acquiring lock: {Name:mk47ee24ca074cb6cc1a641d737215686b099dc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684472  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1129 09:01:29.684482  494126 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 217.393µs
	I1129 09:01:29.684492  494126 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1129 09:01:29.684416  494126 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 61.464µs
	I1129 09:01:29.684504  494126 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1129 09:01:29.684517  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1129 09:01:29.684533  494126 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 171.692µs
	I1129 09:01:29.684552  494126 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1129 09:01:29.684643  494126 cache.go:107] acquiring lock: {Name:mk912246de843459c104f342794e23ecb1fc7a75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684790  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1129 09:01:29.684806  494126 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 226.111µs
	I1129 09:01:29.684824  494126 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1129 09:01:29.684840  494126 cache.go:87] Successfully saved all images to host disk.
	I1129 09:01:29.706829  494126 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:01:29.706854  494126 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:01:29.706878  494126 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:01:29.706918  494126 start.go:360] acquireMachinesLock for no-preload-924441: {Name:mkf9f3b6b30f178cf9b9d50a2dabce8e2c5d48f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.707056  494126 start.go:364] duration metric: took 99.455µs to acquireMachinesLock for "no-preload-924441"
	I1129 09:01:29.707090  494126 start.go:93] Provisioning new machine with config: &{Name:no-preload-924441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-924441 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1129 09:01:29.707206  494126 start.go:125] createHost starting for "" (driver="docker")
	I1129 09:01:28.461537  493486 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1129 09:01:28.461867  493486 start.go:159] libmachine.API.Create for "old-k8s-version-295154" (driver="docker")
	I1129 09:01:28.461917  493486 client.go:173] LocalClient.Create starting
	I1129 09:01:28.462009  493486 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem
	I1129 09:01:28.462065  493486 main.go:143] libmachine: Decoding PEM data...
	I1129 09:01:28.462089  493486 main.go:143] libmachine: Parsing certificate...
	I1129 09:01:28.462160  493486 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem
	I1129 09:01:28.462186  493486 main.go:143] libmachine: Decoding PEM data...
	I1129 09:01:28.462205  493486 main.go:143] libmachine: Parsing certificate...
	I1129 09:01:28.462679  493486 cli_runner.go:164] Run: docker network inspect old-k8s-version-295154 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 09:01:28.481658  493486 cli_runner.go:211] docker network inspect old-k8s-version-295154 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 09:01:28.481745  493486 network_create.go:284] running [docker network inspect old-k8s-version-295154] to gather additional debugging logs...
	I1129 09:01:28.481770  493486 cli_runner.go:164] Run: docker network inspect old-k8s-version-295154
	W1129 09:01:28.500619  493486 cli_runner.go:211] docker network inspect old-k8s-version-295154 returned with exit code 1
	I1129 09:01:28.500661  493486 network_create.go:287] error running [docker network inspect old-k8s-version-295154]: docker network inspect old-k8s-version-295154: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-295154 not found
	I1129 09:01:28.500677  493486 network_create.go:289] output of [docker network inspect old-k8s-version-295154]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-295154 not found
	
	** /stderr **
	I1129 09:01:28.500849  493486 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:01:28.518426  493486 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f69c672bf913 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:26:40:f4:ed:4f:ab} reservation:<nil>}
	I1129 09:01:28.519384  493486 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-96d20aff5877 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:01:e2:a3:b8:33} reservation:<nil>}
	I1129 09:01:28.520407  493486 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f7906c56f869 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:29:75:e3:e0:7f} reservation:<nil>}
	I1129 09:01:28.521974  493486 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f90700}
	I1129 09:01:28.522028  493486 network_create.go:124] attempt to create docker network old-k8s-version-295154 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1129 09:01:28.522109  493486 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-295154 old-k8s-version-295154
	I1129 09:01:28.575478  493486 network_create.go:108] docker network old-k8s-version-295154 192.168.76.0/24 created
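
Note on the subnet probe above: the log walks the 192.168.x.0/24 ranges in steps of 9 in the third octet (49, 58, 67, then 76) and takes the first CIDR that no existing bridge interface already claims, then creates the bridge network with that subnet and gateway. Below is a minimal sketch of that selection step, assuming the start octet and step size observed in this log; minikube's real network package also handles reservations and non-/24 masks.

package main

import "fmt"

// firstFreeSubnet mimics the probe seen in the log: start at 192.168.49.0/24
// and step the third octet by 9 until a CIDR is found that no existing
// Docker bridge network already uses.
func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 255; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	// Subnets reported as taken earlier in this log.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // prints 192.168.76.0/24, matching the log
}
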
	I1129 09:01:28.575522  493486 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-295154" container
	I1129 09:01:28.575603  493486 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 09:01:28.593666  493486 cli_runner.go:164] Run: docker volume create old-k8s-version-295154 --label name.minikube.sigs.k8s.io=old-k8s-version-295154 --label created_by.minikube.sigs.k8s.io=true
	I1129 09:01:28.612389  493486 oci.go:103] Successfully created a docker volume old-k8s-version-295154
	I1129 09:01:28.612501  493486 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-295154-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-295154 --entrypoint /usr/bin/test -v old-k8s-version-295154:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 09:01:29.238109  493486 oci.go:107] Successfully prepared a docker volume old-k8s-version-295154
	I1129 09:01:29.238162  493486 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1129 09:01:29.238176  493486 kic.go:194] Starting extracting preloaded images to volume ...
	I1129 09:01:29.238241  493486 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-255825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-295154:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1129 09:01:32.586626  493486 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-255825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-295154:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (3.348341473s)
	I1129 09:01:32.586660  493486 kic.go:203] duration metric: took 3.348481997s to extract preloaded images to volume ...
	W1129 09:01:32.586761  493486 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1129 09:01:32.586805  493486 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1129 09:01:32.586861  493486 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 09:01:32.650922  493486 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-295154 --name old-k8s-version-295154 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-295154 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-295154 --network old-k8s-version-295154 --ip 192.168.76.2 --volume old-k8s-version-295154:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 09:01:32.982372  493486 cli_runner.go:164] Run: docker container inspect old-k8s-version-295154 --format={{.State.Running}}
	I1129 09:01:33.001073  493486 cli_runner.go:164] Run: docker container inspect old-k8s-version-295154 --format={{.State.Status}}
	I1129 09:01:33.021021  493486 cli_runner.go:164] Run: docker exec old-k8s-version-295154 stat /var/lib/dpkg/alternatives/iptables
	I1129 09:01:33.078706  493486 oci.go:144] the created container "old-k8s-version-295154" has a running status.
	I1129 09:01:33.078890  493486 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa...
	I1129 09:01:33.213970  493486 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 09:01:33.251103  493486 cli_runner.go:164] Run: docker container inspect old-k8s-version-295154 --format={{.State.Status}}
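
Note: the two lines above generate a fresh RSA key pair for the kic container and copy the public half into /home/docker/.ssh/authorized_keys (381 bytes) before it is chown'd to the docker user further down. A rough sketch of producing such an authorized_keys entry, assuming the standard crypto/rsa and golang.org/x/crypto/ssh packages rather than minikube's own helpers:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate an RSA key pair comparable to the id_rsa created in the log.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Encode the public key in authorized_keys format; a line like this is what
	// gets appended to /home/docker/.ssh/authorized_keys inside the container.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(string(ssh.MarshalAuthorizedKey(pub)))
}
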
	I1129 09:01:29.709142  494126 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1129 09:01:29.709367  494126 start.go:159] libmachine.API.Create for "no-preload-924441" (driver="docker")
	I1129 09:01:29.709398  494126 client.go:173] LocalClient.Create starting
	I1129 09:01:29.709475  494126 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem
	I1129 09:01:29.709526  494126 main.go:143] libmachine: Decoding PEM data...
	I1129 09:01:29.709553  494126 main.go:143] libmachine: Parsing certificate...
	I1129 09:01:29.709629  494126 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem
	I1129 09:01:29.709661  494126 main.go:143] libmachine: Decoding PEM data...
	I1129 09:01:29.709679  494126 main.go:143] libmachine: Parsing certificate...
	I1129 09:01:29.710082  494126 cli_runner.go:164] Run: docker network inspect no-preload-924441 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 09:01:29.727862  494126 cli_runner.go:211] docker network inspect no-preload-924441 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 09:01:29.727982  494126 network_create.go:284] running [docker network inspect no-preload-924441] to gather additional debugging logs...
	I1129 09:01:29.728011  494126 cli_runner.go:164] Run: docker network inspect no-preload-924441
	W1129 09:01:29.747053  494126 cli_runner.go:211] docker network inspect no-preload-924441 returned with exit code 1
	I1129 09:01:29.747092  494126 network_create.go:287] error running [docker network inspect no-preload-924441]: docker network inspect no-preload-924441: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-924441 not found
	I1129 09:01:29.747129  494126 network_create.go:289] output of [docker network inspect no-preload-924441]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-924441 not found
	
	** /stderr **
	I1129 09:01:29.747297  494126 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:01:29.769138  494126 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f69c672bf913 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:26:40:f4:ed:4f:ab} reservation:<nil>}
	I1129 09:01:29.769961  494126 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-96d20aff5877 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:01:e2:a3:b8:33} reservation:<nil>}
	I1129 09:01:29.770795  494126 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f7906c56f869 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:29:75:e3:e0:7f} reservation:<nil>}
	I1129 09:01:29.771440  494126 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-aea341d97cf5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ea:fb:22:ff:e0:65} reservation:<nil>}
	I1129 09:01:29.771972  494126 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-5ec7c7346e1b IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:f6:a5:df:dd:c8:cf} reservation:<nil>}
	I1129 09:01:29.772536  494126 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-ede9a8c5c6b0 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:3e:6e:06:75:02:7a} reservation:<nil>}
	I1129 09:01:29.773382  494126 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00201aa40}
	I1129 09:01:29.773412  494126 network_create.go:124] attempt to create docker network no-preload-924441 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1129 09:01:29.773492  494126 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-924441 no-preload-924441
	I1129 09:01:29.826699  494126 network_create.go:108] docker network no-preload-924441 192.168.103.0/24 created
	I1129 09:01:29.826822  494126 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-924441" container
	I1129 09:01:29.826907  494126 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 09:01:29.848520  494126 cli_runner.go:164] Run: docker volume create no-preload-924441 --label name.minikube.sigs.k8s.io=no-preload-924441 --label created_by.minikube.sigs.k8s.io=true
	I1129 09:01:29.870388  494126 oci.go:103] Successfully created a docker volume no-preload-924441
	I1129 09:01:29.870496  494126 cli_runner.go:164] Run: docker run --rm --name no-preload-924441-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-924441 --entrypoint /usr/bin/test -v no-preload-924441:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 09:01:32.848045  494126 cli_runner.go:217] Completed: docker run --rm --name no-preload-924441-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-924441 --entrypoint /usr/bin/test -v no-preload-924441:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (2.977502795s)
	I1129 09:01:32.848077  494126 oci.go:107] Successfully prepared a docker volume no-preload-924441
	I1129 09:01:32.848131  494126 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	W1129 09:01:32.848227  494126 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1129 09:01:32.848271  494126 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1129 09:01:32.848312  494126 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 09:01:32.909124  494126 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-924441 --name no-preload-924441 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-924441 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-924441 --network no-preload-924441 --ip 192.168.103.2 --volume no-preload-924441:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 09:01:33.229639  494126 cli_runner.go:164] Run: docker container inspect no-preload-924441 --format={{.State.Running}}
	I1129 09:01:33.257967  494126 cli_runner.go:164] Run: docker container inspect no-preload-924441 --format={{.State.Status}}
	I1129 09:01:33.283525  494126 cli_runner.go:164] Run: docker exec no-preload-924441 stat /var/lib/dpkg/alternatives/iptables
	I1129 09:01:33.358911  494126 oci.go:144] the created container "no-preload-924441" has a running status.
	I1129 09:01:33.358964  494126 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa...
	I1129 09:01:33.456248  494126 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 09:01:33.491041  494126 cli_runner.go:164] Run: docker container inspect no-preload-924441 --format={{.State.Status}}
	I1129 09:01:33.515555  494126 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 09:01:33.515581  494126 kic_runner.go:114] Args: [docker exec --privileged no-preload-924441 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 09:01:33.567971  494126 cli_runner.go:164] Run: docker container inspect no-preload-924441 --format={{.State.Status}}
	I1129 09:01:33.599907  494126 machine.go:94] provisionDockerMachine start ...
	I1129 09:01:33.599999  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:33.634873  494126 main.go:143] libmachine: Using SSH client type: native
	I1129 09:01:33.635521  494126 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1129 09:01:33.635590  494126 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:01:33.636667  494126 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34766->127.0.0.1:33063: read: connection reset by peer
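
Note: the handshake error above ("read: connection reset by peer") appears moments after the container starts, while sshd inside the kic image is presumably still coming up, and provisioning simply retries the dial. A small retry sketch under that assumption; dialWithRetry is a hypothetical helper, not minikube's actual retry code:

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps attempting a TCP connection to the forwarded SSH port
// until it succeeds or the deadline passes, mirroring how a just-started
// container is polled while sshd is still initializing.
func dialWithRetry(addr string, deadline time.Duration) (net.Conn, error) {
	var lastErr error
	for start := time.Now(); time.Since(start) < deadline; {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		time.Sleep(500 * time.Millisecond)
	}
	return nil, fmt.Errorf("ssh port never became reachable: %w", lastErr)
}

func main() {
	// 127.0.0.1:33063 is the host port Docker mapped to the container's port 22
	// earlier in this log.
	if conn, err := dialWithRetry("127.0.0.1:33063", 30*time.Second); err == nil {
		conn.Close()
		fmt.Println("ssh port reachable")
	} else {
		fmt.Println(err)
	}
}
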
	I1129 09:01:29.724136  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:01:29.724608  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:01:29.724657  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:01:29.724702  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:01:29.763194  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:29.763266  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:29.763286  460401 cri.go:89] found id: ""
	I1129 09:01:29.763304  460401 logs.go:282] 2 containers: [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:01:29.763372  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.769877  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.774814  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:01:29.774887  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:01:29.810078  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:29.810105  460401 cri.go:89] found id: ""
	I1129 09:01:29.810116  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:01:29.810167  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.815272  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:01:29.815349  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:01:29.851653  460401 cri.go:89] found id: ""
	I1129 09:01:29.851680  460401 logs.go:282] 0 containers: []
	W1129 09:01:29.851691  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:01:29.851700  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:01:29.851773  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:01:29.883424  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:29.883449  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:29.883456  460401 cri.go:89] found id: ""
	I1129 09:01:29.883466  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:01:29.883537  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.889105  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.894072  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:01:29.894150  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:01:29.924971  460401 cri.go:89] found id: ""
	I1129 09:01:29.925006  460401 logs.go:282] 0 containers: []
	W1129 09:01:29.925019  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:01:29.925027  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:01:29.925129  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:01:29.954168  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:29.954194  460401 cri.go:89] found id: "f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:29.954199  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:29.954203  460401 cri.go:89] found id: ""
	I1129 09:01:29.954214  460401 logs.go:282] 3 containers: [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:01:29.954278  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.959542  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.964240  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.968754  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:01:29.968820  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:01:29.999663  460401 cri.go:89] found id: ""
	I1129 09:01:29.999685  460401 logs.go:282] 0 containers: []
	W1129 09:01:29.999694  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:01:29.999700  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:01:29.999780  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:01:30.029803  460401 cri.go:89] found id: ""
	I1129 09:01:30.029833  460401 logs.go:282] 0 containers: []
	W1129 09:01:30.029845  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:01:30.029859  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:01:30.029877  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:30.069873  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:01:30.069904  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:30.108923  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:01:30.108958  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:30.146649  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:01:30.146682  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:01:30.190480  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:01:30.190514  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:01:30.225134  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:01:30.225167  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:01:30.299416  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:01:30.299461  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:01:30.314711  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:01:30.314766  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:01:30.384833  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:01:30.384856  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:01:30.384879  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:30.420690  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:01:30.420720  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:30.476182  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:01:30.476221  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:30.507666  460401 logs.go:123] Gathering logs for kube-controller-manager [f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00] ...
	I1129 09:01:30.507698  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:30.536613  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:01:30.536640  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
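
Note: the block above is the fallback path taken while the apiserver's /healthz endpoint refuses connections. Each control-plane component is looked up with "sudo crictl ps -a --quiet --name=<component>" and the last 400 lines of every matching container are collected with "crictl logs --tail 400 <id>". A condensed local sketch of that gather loop, assuming crictl and sudo are available on PATH; the real flow runs these commands over SSH via ssh_runner, as logged:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"}
	for _, name := range components {
		// List all CRI containers (running or exited) for this component.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("listing %s containers failed: %v\n", name, err)
			continue
		}
		for _, id := range strings.Fields(string(out)) {
			// Tail the last 400 log lines of each matching container.
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s]\n%s\n", name, id, logs)
		}
	}
}
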
	I1129 09:01:33.076844  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:01:33.077304  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:01:33.077371  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:01:33.077426  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:01:33.111899  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:33.111922  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:33.111928  460401 cri.go:89] found id: ""
	I1129 09:01:33.111938  460401 logs.go:282] 2 containers: [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:01:33.111995  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.117191  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.122615  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:01:33.122688  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:01:33.163794  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:33.163822  460401 cri.go:89] found id: ""
	I1129 09:01:33.163834  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:01:33.163897  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.170244  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:01:33.170334  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:01:33.203629  460401 cri.go:89] found id: ""
	I1129 09:01:33.203662  460401 logs.go:282] 0 containers: []
	W1129 09:01:33.203675  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:01:33.203683  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:01:33.203759  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:01:33.248112  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:33.248142  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:33.248148  460401 cri.go:89] found id: ""
	I1129 09:01:33.248159  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:01:33.248226  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.255192  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.262339  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:01:33.262419  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:01:33.308727  460401 cri.go:89] found id: ""
	I1129 09:01:33.308855  460401 logs.go:282] 0 containers: []
	W1129 09:01:33.308869  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:01:33.308878  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:01:33.309309  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:01:33.361181  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:33.361234  460401 cri.go:89] found id: "f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:33.361241  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:33.361245  460401 cri.go:89] found id: ""
	I1129 09:01:33.361255  460401 logs.go:282] 3 containers: [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:01:33.361343  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.368091  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.374495  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.380899  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:01:33.380965  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:01:33.430643  460401 cri.go:89] found id: ""
	I1129 09:01:33.430670  460401 logs.go:282] 0 containers: []
	W1129 09:01:33.430681  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:01:33.430689  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:01:33.430771  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:01:33.467019  460401 cri.go:89] found id: ""
	I1129 09:01:33.467047  460401 logs.go:282] 0 containers: []
	W1129 09:01:33.467058  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:01:33.467072  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:01:33.467091  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:01:33.529538  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:01:33.529588  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:33.591866  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:01:33.591912  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:33.664144  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:01:33.664179  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:33.701152  460401 logs.go:123] Gathering logs for kube-controller-manager [f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00] ...
	I1129 09:01:33.701195  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:33.735624  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:01:33.735669  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:33.774144  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:01:33.774175  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:01:33.808426  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:01:33.808461  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:01:33.898471  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:01:33.898509  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:01:33.914358  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:01:33.914394  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:01:33.978927  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:01:33.978954  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:01:33.978975  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:34.016239  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:01:34.016268  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:34.055208  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:01:34.055239  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
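
Note: between the two log-gathering passes above, the apiserver endpoint is re-probed (at 09:01:29 and again at 09:01:33); each probe is a plain HTTPS GET against https://192.168.85.2:8443/healthz that currently fails with "connection refused". A hedged sketch of such a probe, assuming certificate verification is skipped for the health check; minikube's api_server.go wraps this in its own client and retry policy:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz reports whether the apiserver answers /healthz successfully.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The health probe does not validate the serving certificate here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused", as seen in this log
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	fmt.Println(probeHealthz("https://192.168.85.2:8443/healthz"))
}
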
	I1129 09:01:33.275806  493486 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 09:01:33.275832  493486 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-295154 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 09:01:33.349350  493486 cli_runner.go:164] Run: docker container inspect old-k8s-version-295154 --format={{.State.Status}}
	I1129 09:01:33.378383  493486 machine.go:94] provisionDockerMachine start ...
	I1129 09:01:33.378475  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:33.410015  493486 main.go:143] libmachine: Using SSH client type: native
	I1129 09:01:33.410367  493486 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1129 09:01:33.410384  493486 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:01:33.577990  493486 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-295154
	
	I1129 09:01:33.578018  493486 ubuntu.go:182] provisioning hostname "old-k8s-version-295154"
	I1129 09:01:33.578086  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:33.609401  493486 main.go:143] libmachine: Using SSH client type: native
	I1129 09:01:33.609890  493486 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1129 09:01:33.609953  493486 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-295154 && echo "old-k8s-version-295154" | sudo tee /etc/hostname
	I1129 09:01:33.789112  493486 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-295154
	
	I1129 09:01:33.789205  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:33.813423  493486 main.go:143] libmachine: Using SSH client type: native
	I1129 09:01:33.813741  493486 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1129 09:01:33.813774  493486 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-295154' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-295154/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-295154' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:01:33.966671  493486 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:01:33.966701  493486 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-255825/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-255825/.minikube}
	I1129 09:01:33.966720  493486 ubuntu.go:190] setting up certificates
	I1129 09:01:33.966746  493486 provision.go:84] configureAuth start
	I1129 09:01:33.966809  493486 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-295154
	I1129 09:01:33.987509  493486 provision.go:143] copyHostCerts
	I1129 09:01:33.987591  493486 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem, removing ...
	I1129 09:01:33.987609  493486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem
	I1129 09:01:33.987703  493486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem (1078 bytes)
	I1129 09:01:33.987854  493486 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem, removing ...
	I1129 09:01:33.987873  493486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem
	I1129 09:01:33.987926  493486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem (1123 bytes)
	I1129 09:01:33.988030  493486 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem, removing ...
	I1129 09:01:33.988043  493486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem
	I1129 09:01:33.988093  493486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem (1679 bytes)
	I1129 09:01:33.988197  493486 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-295154 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-295154]
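
Note: configureAuth above issues a per-machine server certificate signed by the minikube CA, carrying the SANs listed in the log (127.0.0.1, 192.168.76.2, localhost, minikube, old-k8s-version-295154) and the 26280h expiry from the profile config. A stripped-down sketch of issuing such a certificate with crypto/x509; the throwaway self-signed CA below stands in for the ca.pem/ca-key.pem pair that minikube loads, and PEM I/O, serial-number handling, and error checks are abbreviated:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for the minikube CA loaded from certs/ca.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs listed in the provision log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-295154"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-295154"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued server cert, %d DER bytes\n", len(der))
}
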
	I1129 09:01:34.173289  493486 provision.go:177] copyRemoteCerts
	I1129 09:01:34.173365  493486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:01:34.173409  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:34.192053  493486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa Username:docker}
	I1129 09:01:34.294293  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:01:34.313898  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1129 09:01:34.331337  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 09:01:34.348272  493486 provision.go:87] duration metric: took 381.510752ms to configureAuth
	I1129 09:01:34.348301  493486 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:01:34.348457  493486 config.go:182] Loaded profile config "old-k8s-version-295154": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1129 09:01:34.348472  493486 machine.go:97] duration metric: took 970.068662ms to provisionDockerMachine
	I1129 09:01:34.348481  493486 client.go:176] duration metric: took 5.886553133s to LocalClient.Create
	I1129 09:01:34.348502  493486 start.go:167] duration metric: took 5.88663904s to libmachine.API.Create "old-k8s-version-295154"
	I1129 09:01:34.348512  493486 start.go:293] postStartSetup for "old-k8s-version-295154" (driver="docker")
	I1129 09:01:34.348520  493486 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:01:34.348570  493486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:01:34.348614  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:34.366501  493486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa Username:docker}
	I1129 09:01:34.469910  493486 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:01:34.473823  493486 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:01:34.473855  493486 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:01:34.473868  493486 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-255825/.minikube/addons for local assets ...
	I1129 09:01:34.473922  493486 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-255825/.minikube/files for local assets ...
	I1129 09:01:34.474038  493486 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem -> 2594832.pem in /etc/ssl/certs
	I1129 09:01:34.474177  493486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:01:34.481912  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem --> /etc/ssl/certs/2594832.pem (1708 bytes)
	I1129 09:01:34.502433  493486 start.go:296] duration metric: took 153.905912ms for postStartSetup
	I1129 09:01:34.502813  493486 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-295154
	I1129 09:01:34.520071  493486 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/config.json ...
	I1129 09:01:34.520308  493486 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:01:34.520347  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:34.539111  493486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa Username:docker}
	I1129 09:01:34.640199  493486 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:01:34.644901  493486 start.go:128] duration metric: took 6.185289215s to createHost
	I1129 09:01:34.644928  493486 start.go:83] releasing machines lock for "old-k8s-version-295154", held for 6.185484113s
	I1129 09:01:34.644991  493486 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-295154
	I1129 09:01:34.662525  493486 ssh_runner.go:195] Run: cat /version.json
	I1129 09:01:34.662583  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:34.662584  493486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:01:34.662648  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:34.679837  493486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa Username:docker}
	I1129 09:01:34.681115  493486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa Username:docker}
	I1129 09:01:34.833568  493486 ssh_runner.go:195] Run: systemctl --version
	I1129 09:01:34.840355  493486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:01:34.844844  493486 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:01:34.844907  493486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:01:34.869137  493486 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1129 09:01:34.869161  493486 start.go:496] detecting cgroup driver to use...
	I1129 09:01:34.869194  493486 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:01:34.869251  493486 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1129 09:01:34.883461  493486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1129 09:01:34.895885  493486 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:01:34.895942  493486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:01:34.912002  493486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:01:34.929350  493486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:01:35.015369  493486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:01:35.101537  493486 docker.go:234] disabling docker service ...
	I1129 09:01:35.101597  493486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:01:35.120759  493486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:01:35.133226  493486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:01:35.217122  493486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:01:35.301702  493486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:01:35.314440  493486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:01:35.328312  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1129 09:01:35.338331  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1129 09:01:35.346975  493486 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1129 09:01:35.347033  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1129 09:01:35.355511  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:01:35.363986  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1129 09:01:35.372342  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:01:35.380589  493486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:01:35.388205  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1129 09:01:35.396344  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1129 09:01:35.404459  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1129 09:01:35.412783  493486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:01:35.420177  493486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:01:35.427378  493486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:01:35.508150  493486 ssh_runner.go:195] Run: sudo systemctl restart containerd
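
Between 09:01:35.314 and 09:01:35.508 the runtime is wired up: /etc/crictl.yaml is written so crictl talks to the containerd socket, a series of in-place sed edits sets SystemdCgroup = true (matching the "systemd" cgroup driver detected on the host), pins sandbox_image to registry.k8s.io/pause:3.9 for this Kubernetes version, points conf_dir at /etc/cni/net.d and re-adds enable_unprivileged_ports = true, and containerd is restarted. A quick way to confirm the edits landed, assuming the same paths as in the log:

    cat /etc/crictl.yaml   # expect: runtime-endpoint: unix:///run/containerd/containerd.sock
    grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
    systemctl is-active containerd
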
	I1129 09:01:35.605801  493486 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1129 09:01:35.605868  493486 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1129 09:01:35.610095  493486 start.go:564] Will wait 60s for crictl version
	I1129 09:01:35.610140  493486 ssh_runner.go:195] Run: which crictl
	I1129 09:01:35.613826  493486 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:01:35.640869  493486 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1129 09:01:35.640947  493486 ssh_runner.go:195] Run: containerd --version
	I1129 09:01:35.662573  493486 ssh_runner.go:195] Run: containerd --version
	I1129 09:01:35.686990  493486 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1129 09:01:35.688126  493486 cli_runner.go:164] Run: docker network inspect old-k8s-version-295154 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:01:35.705269  493486 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1129 09:01:35.709565  493486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
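
The /etc/hosts rewrite above is idempotent: grep -v drops any existing host.minikube.internal entry, echo appends a fresh mapping to the host-side gateway (192.168.76.1 here), and the result goes to a temp file that is then copied back with sudo, so only the final cp needs root. The same pattern with a hypothetical name/IP pair, purely for illustration:

    { grep -v $'\texample.internal$' /etc/hosts; echo "10.0.0.1	example.internal"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
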
	I1129 09:01:35.720029  493486 kubeadm.go:884] updating cluster {Name:old-k8s-version-295154 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-295154 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:01:35.720146  493486 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1129 09:01:35.720192  493486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:01:35.745337  493486 containerd.go:627] all images are preloaded for containerd runtime.
	I1129 09:01:35.745359  493486 containerd.go:534] Images already preloaded, skipping extraction
	I1129 09:01:35.745433  493486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:01:35.768552  493486 containerd.go:627] all images are preloaded for containerd runtime.
	I1129 09:01:35.768573  493486 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:01:35.768582  493486 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 containerd true true} ...
	I1129 09:01:35.768708  493486 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-295154 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-295154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
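
The unit text above is a standard systemd drop-in override: the empty ExecStart= first clears whatever ExecStart the packaged kubelet.service ships with, and the second ExecStart= substitutes the minikube-specific command line (bootstrap kubeconfig, node IP, hostname override). The same override pattern for an arbitrary, hypothetical example.service, installed the way minikube writes its own files (printf piped to tee, as at 09:01:35.314 above):

    sudo mkdir -p /etc/systemd/system/example.service.d
    printf '[Service]\nExecStart=\nExecStart=/usr/local/bin/example --flag=value\n' \
      | sudo tee /etc/systemd/system/example.service.d/10-override.conf
    sudo systemctl daemon-reload
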
	I1129 09:01:35.768800  493486 ssh_runner.go:195] Run: sudo crictl info
	I1129 09:01:35.793684  493486 cni.go:84] Creating CNI manager for ""
	I1129 09:01:35.793704  493486 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:01:35.793722  493486 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:01:35.793760  493486 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-295154 NodeName:old-k8s-version-295154 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt S
taticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:01:35.793881  493486 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-295154"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
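
The rendered kubeadm config above packs four documents into one file: InitConfiguration (node registration, CRI socket, advertise address), ClusterConfiguration (API server SANs and admission plugins, etcd data dir, pod/service CIDRs), KubeletConfiguration (systemd cgroup driver, disabled eviction thresholds) and KubeProxyConfiguration (cluster CIDR, conntrack overrides). The scp a few lines below writes it to /var/tmp/minikube/kubeadm.yaml.new (2175 bytes); a quick sanity check on the node, using that path:

    grep -c '^---' /var/tmp/minikube/kubeadm.yaml.new   # expect 3 separators between the 4 documents
    grep '^kind:' /var/tmp/minikube/kubeadm.yaml.new    # InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration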
	
	I1129 09:01:35.793941  493486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1129 09:01:35.801702  493486 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:01:35.801779  493486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:01:35.809370  493486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1129 09:01:35.821645  493486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:01:35.837123  493486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2175 bytes)
	I1129 09:01:35.849282  493486 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:01:35.852777  493486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:01:35.862291  493486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:01:35.945522  493486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:01:35.967020  493486 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154 for IP: 192.168.76.2
	I1129 09:01:35.967046  493486 certs.go:195] generating shared ca certs ...
	I1129 09:01:35.967066  493486 certs.go:227] acquiring lock for ca certs: {Name:mk5e6bcae0a6944966b241f3c6197a472703c991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:35.967208  493486 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.key
	I1129 09:01:35.967259  493486 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.key
	I1129 09:01:35.967269  493486 certs.go:257] generating profile certs ...
	I1129 09:01:35.967334  493486 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.key
	I1129 09:01:35.967347  493486 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.crt with IP's: []
	I1129 09:01:36.097254  493486 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.crt ...
	I1129 09:01:36.097290  493486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.crt: {Name:mk21cfae97f1407d02cd99fe2a74be759b699397 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:36.097496  493486 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.key ...
	I1129 09:01:36.097514  493486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.key: {Name:mk0736bb845004e9c4d4a2d8602930ec0568eec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:36.097631  493486 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.key.a040bf72
	I1129 09:01:36.097693  493486 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.crt.a040bf72 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1129 09:01:36.144552  493486 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.crt.a040bf72 ...
	I1129 09:01:36.144579  493486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.crt.a040bf72: {Name:mk3fedcec97acb487835213600ee8b696c362f94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:36.144774  493486 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.key.a040bf72 ...
	I1129 09:01:36.144793  493486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.key.a040bf72: {Name:mk9dc52d2daf1391895a4ee3c561f559be0e2755 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:36.144904  493486 certs.go:382] copying /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.crt.a040bf72 -> /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.crt
	I1129 09:01:36.145012  493486 certs.go:386] copying /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.key.a040bf72 -> /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.key
	I1129 09:01:36.145117  493486 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.key
	I1129 09:01:36.145138  493486 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.crt with IP's: []
	I1129 09:01:36.307914  493486 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.crt ...
	I1129 09:01:36.307946  493486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.crt: {Name:mk698ad1b9e2e29d385fd97b123d5b48273c6d5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:36.308144  493486 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.key ...
	I1129 09:01:36.308172  493486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.key: {Name:mkcfd3db96260b6b8677060f32dcbd4dd8f838bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:36.308432  493486 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483.pem (1338 bytes)
	W1129 09:01:36.308490  493486 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483_empty.pem, impossibly tiny 0 bytes
	I1129 09:01:36.308506  493486 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 09:01:36.308543  493486 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:01:36.308590  493486 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:01:36.308633  493486 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem (1679 bytes)
	I1129 09:01:36.308689  493486 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem (1708 bytes)
	I1129 09:01:36.309360  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:01:36.328372  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:01:36.345872  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:01:36.363285  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 09:01:36.380427  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1129 09:01:36.397563  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 09:01:36.414929  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:01:36.432334  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 09:01:36.449233  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem --> /usr/share/ca-certificates/2594832.pem (1708 bytes)
	I1129 09:01:36.469085  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:01:36.485869  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483.pem --> /usr/share/ca-certificates/259483.pem (1338 bytes)
	I1129 09:01:36.502784  493486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:01:36.515208  493486 ssh_runner.go:195] Run: openssl version
	I1129 09:01:36.521390  493486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:01:36.529514  493486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:01:36.533021  493486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:01:36.533062  493486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:01:36.567579  493486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 09:01:36.576162  493486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259483.pem && ln -fs /usr/share/ca-certificates/259483.pem /etc/ssl/certs/259483.pem"
	I1129 09:01:36.584343  493486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259483.pem
	I1129 09:01:36.588122  493486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:35 /usr/share/ca-certificates/259483.pem
	I1129 09:01:36.588176  493486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259483.pem
	I1129 09:01:36.626659  493486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259483.pem /etc/ssl/certs/51391683.0"
	I1129 09:01:36.635780  493486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2594832.pem && ln -fs /usr/share/ca-certificates/2594832.pem /etc/ssl/certs/2594832.pem"
	I1129 09:01:36.644862  493486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2594832.pem
	I1129 09:01:36.648851  493486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:35 /usr/share/ca-certificates/2594832.pem
	I1129 09:01:36.648906  493486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2594832.pem
	I1129 09:01:36.691340  493486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2594832.pem /etc/ssl/certs/3ec20f2e.0"
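
The openssl/ln sequence above (repeated for each of the three certificates) implements OpenSSL's hashed-directory lookup: every CA placed under /usr/share/ca-certificates is made reachable through a symlink in /etc/ssl/certs named <subject-hash>.0, where the hash is what `openssl x509 -hash -noout` prints. Reproducing the check for the minikube CA with the paths from the log:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${h}.0"   # b5213941.0 in this run, pointing back at minikubeCA.pem
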
	I1129 09:01:36.701173  493486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:01:36.705050  493486 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 09:01:36.705110  493486 kubeadm.go:401] StartCluster: {Name:old-k8s-version-295154 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-295154 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custo
mQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:01:36.705201  493486 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1129 09:01:36.705272  493486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:01:36.734535  493486 cri.go:89] found id: ""
	I1129 09:01:36.734592  493486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:01:36.743400  493486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 09:01:36.751273  493486 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 09:01:36.751332  493486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 09:01:36.760386  493486 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 09:01:36.760404  493486 kubeadm.go:158] found existing configuration files:
	
	I1129 09:01:36.760450  493486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 09:01:36.768796  493486 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 09:01:36.768854  493486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 09:01:36.776326  493486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 09:01:36.784663  493486 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 09:01:36.784720  493486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 09:01:36.793650  493486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 09:01:36.801817  493486 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 09:01:36.801887  493486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 09:01:36.811081  493486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 09:01:36.819075  493486 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 09:01:36.819130  493486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1129 09:01:36.827369  493486 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 09:01:36.885752  493486 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1129 09:01:36.885824  493486 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 09:01:36.932588  493486 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 09:01:36.932993  493486 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1129 09:01:36.933139  493486 kubeadm.go:319] OS: Linux
	I1129 09:01:36.933232  493486 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 09:01:36.933332  493486 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 09:01:36.933468  493486 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 09:01:36.933539  493486 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 09:01:36.933597  493486 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 09:01:36.933656  493486 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 09:01:36.933717  493486 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 09:01:36.933794  493486 kubeadm.go:319] CGROUPS_IO: enabled
	I1129 09:01:37.018039  493486 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 09:01:37.018169  493486 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 09:01:37.018319  493486 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1129 09:01:37.171075  493486 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 09:01:37.173428  493486 out.go:252]   - Generating certificates and keys ...
	I1129 09:01:37.173535  493486 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 09:01:37.173613  493486 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 09:01:37.301964  493486 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1129 09:01:37.410711  493486 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 09:01:37.550821  493486 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 09:01:37.787553  493486 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 09:01:37.889172  493486 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 09:01:37.889414  493486 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-295154] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1129 09:01:38.063017  493486 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 09:01:38.063214  493486 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-295154] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1129 09:01:38.202234  493486 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 09:01:38.262563  493486 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
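
From this point on, the output interleaves two profiles being provisioned in parallel: process 493486 is old-k8s-version-295154 (Kubernetes v1.28.0) and process 494126 is no-preload-924441 (v1.34.1). The third column of each klog line is that process id, so the two streams can be separated when reading or bisecting the log; a sketch, assuming the combined output has been saved to a local file called minikube-start.log (hypothetical name):

    grep ' 493486 ' minikube-start.log   # only the old-k8s-version-295154 lines
    grep ' 494126 ' minikube-start.log   # only the no-preload-924441 lines
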
	I1129 09:01:36.787780  494126 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-924441
	
	I1129 09:01:36.787807  494126 ubuntu.go:182] provisioning hostname "no-preload-924441"
	I1129 09:01:36.787868  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:36.808836  494126 main.go:143] libmachine: Using SSH client type: native
	I1129 09:01:36.809153  494126 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1129 09:01:36.809173  494126 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-924441 && echo "no-preload-924441" | sudo tee /etc/hostname
	I1129 09:01:36.973090  494126 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-924441
	
	I1129 09:01:36.973172  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:36.993095  494126 main.go:143] libmachine: Using SSH client type: native
	I1129 09:01:36.993348  494126 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1129 09:01:36.993366  494126 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-924441' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-924441/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-924441' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:01:37.147252  494126 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:01:37.147286  494126 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-255825/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-255825/.minikube}
	I1129 09:01:37.147336  494126 ubuntu.go:190] setting up certificates
	I1129 09:01:37.147350  494126 provision.go:84] configureAuth start
	I1129 09:01:37.147407  494126 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-924441
	I1129 09:01:37.167771  494126 provision.go:143] copyHostCerts
	I1129 09:01:37.167841  494126 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem, removing ...
	I1129 09:01:37.167856  494126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem
	I1129 09:01:37.167941  494126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem (1078 bytes)
	I1129 09:01:37.168073  494126 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem, removing ...
	I1129 09:01:37.168087  494126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem
	I1129 09:01:37.168135  494126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem (1123 bytes)
	I1129 09:01:37.168246  494126 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem, removing ...
	I1129 09:01:37.168259  494126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem
	I1129 09:01:37.168304  494126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem (1679 bytes)
	I1129 09:01:37.168383  494126 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem org=jenkins.no-preload-924441 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-924441]
	I1129 09:01:37.302569  494126 provision.go:177] copyRemoteCerts
	I1129 09:01:37.302625  494126 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:01:37.302676  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:37.320965  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa Username:docker}
	I1129 09:01:37.425520  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:01:37.446589  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 09:01:37.463963  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
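
The provisioning step above creates a docker-machine style server certificate whose SANs (127.0.0.1, 192.168.103.2, localhost, minikube, no-preload-924441, per the san=[...] line) have to cover every name a client might dial, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the node. The SANs can be inspected on the node with a standard openssl call against the path used in the scp above:

    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
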
	I1129 09:01:37.480486  494126 provision.go:87] duration metric: took 333.119398ms to configureAuth
	I1129 09:01:37.480511  494126 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:01:37.480667  494126 config.go:182] Loaded profile config "no-preload-924441": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:01:37.480680  494126 machine.go:97] duration metric: took 3.880753165s to provisionDockerMachine
	I1129 09:01:37.480691  494126 client.go:176] duration metric: took 7.771282469s to LocalClient.Create
	I1129 09:01:37.480714  494126 start.go:167] duration metric: took 7.771346771s to libmachine.API.Create "no-preload-924441"
	I1129 09:01:37.480726  494126 start.go:293] postStartSetup for "no-preload-924441" (driver="docker")
	I1129 09:01:37.480750  494126 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:01:37.480814  494126 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:01:37.480883  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:37.498996  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa Username:docker}
	I1129 09:01:37.602864  494126 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:01:37.606394  494126 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:01:37.606428  494126 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:01:37.606439  494126 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-255825/.minikube/addons for local assets ...
	I1129 09:01:37.606502  494126 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-255825/.minikube/files for local assets ...
	I1129 09:01:37.606593  494126 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem -> 2594832.pem in /etc/ssl/certs
	I1129 09:01:37.606724  494126 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:01:37.614670  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem --> /etc/ssl/certs/2594832.pem (1708 bytes)
	I1129 09:01:37.635134  494126 start.go:296] duration metric: took 154.380805ms for postStartSetup
	I1129 09:01:37.635554  494126 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-924441
	I1129 09:01:37.655528  494126 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/config.json ...
	I1129 09:01:37.655850  494126 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:01:37.655900  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:37.677317  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa Username:docker}
	I1129 09:01:37.781275  494126 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
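
The two df probes above read the state of /var on the node: `df -h /var | awk 'NR==2{print $5}'` extracts the Use% column from the second line of df output, and `df -BG /var | awk 'NR==2{print $4}'` extracts the available space in whole gigabytes, presumably so minikube can warn when the node disk is close to full. Run by hand they look like this (output values are illustrative, not from this run):

    df -h /var | awk 'NR==2{print $5}'    # e.g. 31%
    df -BG /var | awk 'NR==2{print $4}'   # e.g. 180G
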
	I1129 09:01:37.786042  494126 start.go:128] duration metric: took 8.07881841s to createHost
	I1129 09:01:37.786069  494126 start.go:83] releasing machines lock for "no-preload-924441", held for 8.078998368s
	I1129 09:01:37.786141  494126 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-924441
	I1129 09:01:37.805459  494126 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:01:37.805494  494126 ssh_runner.go:195] Run: cat /version.json
	I1129 09:01:37.805552  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:37.805561  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:37.824515  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa Username:docker}
	I1129 09:01:37.825042  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa Username:docker}
	I1129 09:01:37.978797  494126 ssh_runner.go:195] Run: systemctl --version
	I1129 09:01:37.985561  494126 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:01:37.990121  494126 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:01:37.990198  494126 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:01:38.014806  494126 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1129 09:01:38.014833  494126 start.go:496] detecting cgroup driver to use...
	I1129 09:01:38.014872  494126 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:01:38.014922  494126 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1129 09:01:38.028890  494126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1129 09:01:38.040635  494126 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:01:38.040704  494126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:01:38.059274  494126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:01:38.079903  494126 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:01:38.160895  494126 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:01:38.248638  494126 docker.go:234] disabling docker service ...
	I1129 09:01:38.248693  494126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:01:38.270699  494126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:01:38.283241  494126 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:01:38.364018  494126 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:01:38.451578  494126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:01:38.464900  494126 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:01:38.478711  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1129 09:01:38.488688  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1129 09:01:38.497188  494126 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1129 09:01:38.497235  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1129 09:01:38.506143  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:01:38.514500  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1129 09:01:38.522578  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:01:38.530605  494126 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:01:38.538074  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1129 09:01:38.546395  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1129 09:01:38.554633  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1129 09:01:38.564192  494126 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:01:38.571328  494126 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:01:38.578488  494126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:01:38.657072  494126 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1129 09:01:38.731899  494126 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1129 09:01:38.731970  494126 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1129 09:01:38.736165  494126 start.go:564] Will wait 60s for crictl version
	I1129 09:01:38.736223  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:38.739821  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:01:38.765727  494126 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1129 09:01:38.765799  494126 ssh_runner.go:195] Run: containerd --version
	I1129 09:01:38.788554  494126 ssh_runner.go:195] Run: containerd --version
	I1129 09:01:38.813801  494126 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1129 09:01:38.554215  493486 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 09:01:38.554337  493486 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 09:01:38.871587  493486 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 09:01:39.076048  493486 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 09:01:39.365556  493486 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 09:01:39.428949  493486 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 09:01:39.429579  493486 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 09:01:39.438444  493486 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 09:01:38.814940  494126 cli_runner.go:164] Run: docker network inspect no-preload-924441 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:01:38.832444  494126 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1129 09:01:38.836556  494126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:01:38.846826  494126 kubeadm.go:884] updating cluster {Name:no-preload-924441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-924441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:01:38.846940  494126 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:01:38.846988  494126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:01:38.875513  494126 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1129 09:01:38.875537  494126 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1129 09:01:38.875606  494126 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:38.875606  494126 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:01:38.875633  494126 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:01:38.875642  494126 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:01:38.875663  494126 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1129 09:01:38.875672  494126 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:01:38.875613  494126 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1129 09:01:38.875710  494126 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:01:38.877065  494126 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:01:38.877082  494126 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:01:38.877098  494126 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:01:38.877104  494126 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1129 09:01:38.877132  494126 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:01:38.877185  494126 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:01:38.877233  494126 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:38.877189  494126 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
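
Because this is the no-preload profile, no preload tarball exists for the v1.34.1 images (see "assuming images are not preloaded" above), so minikube falls back to LoadCachedImages: it first asks the local Docker daemon for each image (the eight "daemon lookup ... No such image" misses above are expected), then checks the containerd image store on the node, and anything absent there is marked "needs transfer" and loaded from the on-disk image cache. The same presence check can be run manually inside the node with the commands the log already uses:

    sudo crictl images --output json | grep -c 'registry.k8s.io/kube-apiserver'   # 0 means it still has to be transferred
    sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
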
	I1129 09:01:39.045541  494126 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813"
	I1129 09:01:39.045605  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:01:39.049466  494126 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f"
	I1129 09:01:39.049525  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:01:39.055696  494126 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97"
	I1129 09:01:39.055787  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:01:39.065913  494126 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115"
	I1129 09:01:39.065987  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
	I1129 09:01:39.071326  494126 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1129 09:01:39.071386  494126 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:01:39.071433  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.072494  494126 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969"
	I1129 09:01:39.072560  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:01:39.074055  494126 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1129 09:01:39.074103  494126 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:01:39.074155  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.079805  494126 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1129 09:01:39.079853  494126 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:01:39.079906  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.090225  494126 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1129 09:01:39.090271  494126 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1129 09:01:39.090279  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:01:39.090318  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.094954  494126 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7"
	I1129 09:01:39.095016  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:01:39.096356  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:01:39.096365  494126 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1129 09:01:39.096402  494126 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:01:39.096438  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:01:39.096440  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.108053  494126 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I1129 09:01:39.108111  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1129 09:01:39.125198  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:01:39.125300  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1129 09:01:39.125361  494126 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1129 09:01:39.125408  494126 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:01:39.125455  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.128374  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:01:39.132565  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:01:39.132640  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:01:39.138113  494126 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1129 09:01:39.138163  494126 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1129 09:01:39.138200  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.167013  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:01:39.167128  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:01:39.167330  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:01:39.167330  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1129 09:01:39.167996  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:01:39.173113  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:01:39.173171  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1129 09:01:39.214078  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1129 09:01:39.214193  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1129 09:01:39.214389  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:01:39.214576  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1129 09:01:39.220552  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1129 09:01:39.220649  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1129 09:01:39.220857  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1129 09:01:39.220895  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1129 09:01:39.222433  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:01:39.222493  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1129 09:01:39.222587  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1129 09:01:39.222669  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1129 09:01:39.275608  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:01:39.275622  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1129 09:01:39.275679  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1129 09:01:39.275707  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1129 09:01:39.275716  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1129 09:01:39.287672  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1129 09:01:39.287708  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1129 09:01:39.287708  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1129 09:01:39.287808  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1129 09:01:39.287825  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1129 09:01:39.339051  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1129 09:01:39.339082  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1129 09:01:39.339092  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1129 09:01:39.339110  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1129 09:01:39.339137  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1129 09:01:39.339173  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1129 09:01:39.339202  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1129 09:01:39.339317  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1129 09:01:39.424948  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1129 09:01:39.424997  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1129 09:01:39.425030  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1129 09:01:39.425058  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1129 09:01:36.592807  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:01:36.593240  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:01:36.593304  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:01:36.593360  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:01:36.620981  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:36.621002  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:36.621008  460401 cri.go:89] found id: ""
	I1129 09:01:36.621018  460401 logs.go:282] 2 containers: [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:01:36.621079  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.627593  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.632350  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:01:36.632420  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:01:36.660070  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:36.660091  460401 cri.go:89] found id: ""
	I1129 09:01:36.660100  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:01:36.660156  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.664644  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:01:36.664720  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:01:36.696935  460401 cri.go:89] found id: ""
	I1129 09:01:36.696967  460401 logs.go:282] 0 containers: []
	W1129 09:01:36.696977  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:01:36.696985  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:01:36.697045  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:01:36.726832  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:36.726857  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:36.726864  460401 cri.go:89] found id: ""
	I1129 09:01:36.726874  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:01:36.726928  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.732693  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.737783  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:01:36.737848  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:01:36.765201  460401 cri.go:89] found id: ""
	I1129 09:01:36.765229  460401 logs.go:282] 0 containers: []
	W1129 09:01:36.765238  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:01:36.765245  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:01:36.765300  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:01:36.795203  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:36.795231  460401 cri.go:89] found id: "f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:36.795237  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:36.795242  460401 cri.go:89] found id: ""
	I1129 09:01:36.795251  460401 logs.go:282] 3 containers: [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:01:36.795316  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.801008  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.806325  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.811017  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:01:36.811088  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:01:36.840359  460401 cri.go:89] found id: ""
	I1129 09:01:36.840386  460401 logs.go:282] 0 containers: []
	W1129 09:01:36.840397  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:01:36.840406  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:01:36.840469  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:01:36.874045  460401 cri.go:89] found id: ""
	I1129 09:01:36.874068  460401 logs.go:282] 0 containers: []
	W1129 09:01:36.874075  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:01:36.874085  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:01:36.874099  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:01:36.950404  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:01:36.950426  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:01:36.950442  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:36.994232  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:01:36.994264  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:37.049507  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:01:37.049546  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:37.087133  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:01:37.087165  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:37.117577  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:01:37.117602  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:37.154176  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:01:37.154210  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:37.197090  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:01:37.197121  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:37.240775  460401 logs.go:123] Gathering logs for kube-controller-manager [f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00] ...
	I1129 09:01:37.240811  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:37.269234  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:01:37.269260  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:01:37.312948  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:01:37.312979  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:01:37.348500  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:01:37.348527  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:01:37.435755  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:01:37.435786  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:01:39.440026  493486 out.go:252]   - Booting up control plane ...
	I1129 09:01:39.440161  493486 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 09:01:39.440285  493486 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 09:01:39.440970  493486 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 09:01:39.459308  493486 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 09:01:39.460971  493486 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 09:01:39.461057  493486 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 09:01:39.610284  493486 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1129 09:01:39.952440  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:01:39.952996  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:01:39.953076  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:01:39.953145  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:01:39.990073  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:39.990100  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:39.990107  460401 cri.go:89] found id: ""
	I1129 09:01:39.990117  460401 logs.go:282] 2 containers: [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:01:39.990183  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.996871  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:40.002374  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:01:40.002458  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:01:40.036502  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:40.036525  460401 cri.go:89] found id: ""
	I1129 09:01:40.036542  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:01:40.036600  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:40.044171  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:01:40.044261  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:01:40.084048  460401 cri.go:89] found id: ""
	I1129 09:01:40.084165  460401 logs.go:282] 0 containers: []
	W1129 09:01:40.084184  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:01:40.084195  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:01:40.084329  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:01:40.116869  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:40.116899  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:40.116905  460401 cri.go:89] found id: ""
	I1129 09:01:40.116916  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:01:40.116982  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:40.123222  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:40.128079  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:01:40.128146  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:01:40.159071  460401 cri.go:89] found id: ""
	I1129 09:01:40.159101  460401 logs.go:282] 0 containers: []
	W1129 09:01:40.159112  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:01:40.159120  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:01:40.159178  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:01:40.191945  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:40.191973  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:40.191979  460401 cri.go:89] found id: ""
	I1129 09:01:40.191990  460401 logs.go:282] 2 containers: [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:01:40.192055  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:40.197191  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:40.202276  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:01:40.202350  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:01:40.236481  460401 cri.go:89] found id: ""
	I1129 09:01:40.236510  460401 logs.go:282] 0 containers: []
	W1129 09:01:40.236521  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:01:40.236528  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:01:40.236597  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:01:40.266476  460401 cri.go:89] found id: ""
	I1129 09:01:40.266505  460401 logs.go:282] 0 containers: []
	W1129 09:01:40.266516  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:01:40.266529  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:01:40.266547  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:40.310670  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:01:40.310713  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:40.362446  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:01:40.362487  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:40.399108  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:01:40.399138  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:40.435770  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:01:40.435799  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:01:40.485497  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:01:40.485541  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:01:40.502944  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:01:40.502977  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:01:40.592582  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:01:40.592610  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:01:40.592626  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:40.634792  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:01:40.634828  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:01:40.678348  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:01:40.678382  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:01:40.797799  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:01:40.797849  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:40.854148  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:01:40.854196  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:43.404360  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:01:43.404858  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:01:43.404925  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:01:43.404996  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:01:43.435800  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:43.435836  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:43.435843  460401 cri.go:89] found id: ""
	I1129 09:01:43.435854  460401 logs.go:282] 2 containers: [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:01:43.435923  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:43.441287  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:43.445761  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:01:43.445837  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:01:43.474830  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:43.474859  460401 cri.go:89] found id: ""
	I1129 09:01:43.474870  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:01:43.474932  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:43.481397  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:01:43.481483  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:01:43.513967  460401 cri.go:89] found id: ""
	I1129 09:01:43.513995  460401 logs.go:282] 0 containers: []
	W1129 09:01:43.514006  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:01:43.514014  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:01:43.514074  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:01:43.550388  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:43.550416  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:43.550421  460401 cri.go:89] found id: ""
	I1129 09:01:43.550431  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:01:43.550505  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:43.557316  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:43.563173  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:01:43.563248  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:01:43.599482  460401 cri.go:89] found id: ""
	I1129 09:01:43.599524  460401 logs.go:282] 0 containers: []
	W1129 09:01:43.599535  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:01:43.599545  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:01:43.599611  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:01:43.637030  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:43.637053  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:43.637059  460401 cri.go:89] found id: ""
	I1129 09:01:43.637069  460401 logs.go:282] 2 containers: [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:01:43.637130  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:43.643786  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:43.650011  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:01:43.650089  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:01:43.687244  460401 cri.go:89] found id: ""
	I1129 09:01:43.687273  460401 logs.go:282] 0 containers: []
	W1129 09:01:43.687295  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:01:43.687303  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:01:43.687372  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:01:43.726453  460401 cri.go:89] found id: ""
	I1129 09:01:43.726490  460401 logs.go:282] 0 containers: []
	W1129 09:01:43.726501  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:01:43.726515  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:01:43.726533  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:43.795442  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:01:43.795490  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:43.841417  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:01:43.841457  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:43.888511  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:01:43.888554  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:01:43.930753  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:01:43.930789  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:01:44.043358  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:01:44.043410  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:01:44.065065  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:01:44.065107  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:44.112915  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:01:44.112958  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:01:44.174077  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:01:44.174120  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:01:44.247887  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:01:44.247909  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:01:44.247927  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:44.290842  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:01:44.290882  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:44.335297  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:01:44.335330  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:39.522040  494126 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1129 09:01:39.522116  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1129 09:01:39.664265  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1129 09:01:39.664314  494126 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1129 09:01:39.664386  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1129 09:01:40.291377  494126 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1129 09:01:40.291450  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:40.811289  494126 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.146868238s)
	I1129 09:01:40.811331  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1129 09:01:40.811358  494126 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1129 09:01:40.811407  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1129 09:01:40.811531  494126 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1129 09:01:40.811570  494126 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:40.811610  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:41.858427  494126 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.046983131s)
	I1129 09:01:41.858463  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1129 09:01:41.858488  494126 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1129 09:01:41.858484  494126 ssh_runner.go:235] Completed: which crictl: (1.046843529s)
	I1129 09:01:41.858549  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1129 09:01:41.858557  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:43.352594  494126 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.494004994s)
	I1129 09:01:43.352634  494126 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.49406142s)
	I1129 09:01:43.352657  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1129 09:01:43.352684  494126 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1129 09:01:43.352721  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:43.352741  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1129 09:01:44.495181  494126 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.142420788s)
	I1129 09:01:44.495251  494126 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.142485031s)
	I1129 09:01:44.495274  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:44.495280  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1129 09:01:44.495307  494126 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1129 09:01:44.495357  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1129 09:01:44.611298  493486 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.002099 seconds
	I1129 09:01:44.611461  493486 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 09:01:44.626505  493486 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 09:01:45.150669  493486 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 09:01:45.150981  493486 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-295154 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 09:01:45.666153  493486 kubeadm.go:319] [bootstrap-token] Using token: fc3siq.brm7sjv6bjwb7j34
	I1129 09:01:45.667757  493486 out.go:252]   - Configuring RBAC rules ...
	I1129 09:01:45.667991  493486 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 09:01:45.673404  493486 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 09:01:45.685336  493486 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 09:01:45.691974  493486 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 09:01:45.695311  493486 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 09:01:45.698699  493486 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 09:01:45.712796  493486 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 09:01:45.913473  493486 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 09:01:46.081267  493486 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 09:01:46.081993  493486 kubeadm.go:319] 
	I1129 09:01:46.082087  493486 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 09:01:46.082095  493486 kubeadm.go:319] 
	I1129 09:01:46.082160  493486 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 09:01:46.082179  493486 kubeadm.go:319] 
	I1129 09:01:46.082199  493486 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 09:01:46.082251  493486 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 09:01:46.082302  493486 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 09:01:46.082308  493486 kubeadm.go:319] 
	I1129 09:01:46.082372  493486 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 09:01:46.082377  493486 kubeadm.go:319] 
	I1129 09:01:46.082434  493486 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 09:01:46.082445  493486 kubeadm.go:319] 
	I1129 09:01:46.082520  493486 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 09:01:46.082627  493486 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 09:01:46.082750  493486 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 09:01:46.082756  493486 kubeadm.go:319] 
	I1129 09:01:46.082891  493486 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 09:01:46.083019  493486 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 09:01:46.083030  493486 kubeadm.go:319] 
	I1129 09:01:46.083149  493486 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token fc3siq.brm7sjv6bjwb7j34 \
	I1129 09:01:46.083319  493486 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:cfb13a4080e942b53ddf5e01885fcdd270ac918e177076400130991e2b6b7778 \
	I1129 09:01:46.083366  493486 kubeadm.go:319] 	--control-plane 
	I1129 09:01:46.083383  493486 kubeadm.go:319] 
	I1129 09:01:46.083539  493486 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 09:01:46.083561  493486 kubeadm.go:319] 
	I1129 09:01:46.083696  493486 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token fc3siq.brm7sjv6bjwb7j34 \
	I1129 09:01:46.083889  493486 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:cfb13a4080e942b53ddf5e01885fcdd270ac918e177076400130991e2b6b7778 
	I1129 09:01:46.087692  493486 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1129 09:01:46.087874  493486 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1129 09:01:46.087925  493486 cni.go:84] Creating CNI manager for ""
	I1129 09:01:46.087942  493486 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:01:46.089437  493486 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1129 09:01:46.093295  493486 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 09:01:46.100033  493486 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1129 09:01:46.100061  493486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 09:01:46.118046  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1129 09:01:47.108562  493486 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 09:01:47.108767  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:47.108838  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-295154 minikube.k8s.io/updated_at=2025_11_29T09_01_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=old-k8s-version-295154 minikube.k8s.io/primary=true
	I1129 09:01:47.209163  493486 ops.go:34] apiserver oom_adj: -16
	I1129 09:01:47.209168  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:47.709726  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:48.209857  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:44.521775  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1129 09:01:44.521916  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1129 09:01:45.636811  494126 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.141419574s)
	I1129 09:01:45.636849  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1129 09:01:45.636857  494126 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.114924181s)
	I1129 09:01:45.636879  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1129 09:01:45.636882  494126 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1129 09:01:45.636902  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1129 09:01:45.636924  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1129 09:01:48.452908  494126 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (2.815950505s)
	I1129 09:01:48.452936  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1129 09:01:48.452972  494126 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1129 09:01:48.453041  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1129 09:01:49.370622  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1129 09:01:49.370663  494126 cache_images.go:125] Successfully loaded all cached images
	I1129 09:01:49.370668  494126 cache_images.go:94] duration metric: took 10.495116704s to LoadCachedImages
	I1129 09:01:49.370682  494126 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 containerd true true} ...
	I1129 09:01:49.370811  494126 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-924441 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-924441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:01:49.370873  494126 ssh_runner.go:195] Run: sudo crictl info
	I1129 09:01:49.397690  494126 cni.go:84] Creating CNI manager for ""
	I1129 09:01:49.397714  494126 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:01:49.397740  494126 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:01:49.397786  494126 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-924441 NodeName:no-preload-924441 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:01:49.397929  494126 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-924441"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
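Note that the generated KubeletConfiguration above deliberately turns off disk-pressure handling for CI (imageGCHighThresholdPercent: 100 and all evictionHard thresholds at 0%), so a full host disk cannot evict test pods. A small sketch that pulls exactly those fields out of such a config with gopkg.in/yaml.v3 (the struct and field selection are illustrative assumptions, not minikube code):

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// kubeletBits captures only the fields from the generated KubeletConfiguration
// that control cgroups, swap and disk-pressure behaviour in the log above.
type kubeletBits struct {
	CgroupDriver                string            `yaml:"cgroupDriver"`
	FailSwapOn                  bool              `yaml:"failSwapOn"`
	ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
	EvictionHard                map[string]string `yaml:"evictionHard"`
}

const cfg = `
cgroupDriver: systemd
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
`

func main() {
	var k kubeletBits
	if err := yaml.Unmarshal([]byte(cfg), &k); err != nil {
		panic(err)
	}
	fmt.Printf("driver=%s swapAllowed=%v gcThreshold=%d%% eviction=%v\n",
		k.CgroupDriver, !k.FailSwapOn, k.ImageGCHighThresholdPercent, k.EvictionHard)
}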
	
	I1129 09:01:49.397999  494126 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:01:49.407101  494126 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1129 09:01:49.407180  494126 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1129 09:01:49.415958  494126 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1129 09:01:49.415978  494126 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256
	I1129 09:01:49.416026  494126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:01:49.416047  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1129 09:01:49.415978  494126 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I1129 09:01:49.416149  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1129 09:01:49.429834  494126 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1129 09:01:49.429872  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1129 09:01:49.429915  494126 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1129 09:01:49.429924  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1129 09:01:49.429943  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
	I1129 09:01:49.438987  494126 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1129 09:01:49.439024  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
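The "Not caching binary" lines fetch kubectl, kubeadm and kubelet straight from dl.k8s.io with a checksum= qualifier, i.e. each download is verified against its published .sha256 file before being placed under /var/lib/minikube/binaries/v1.34.1. A rough standard-library sketch of that verify-before-install step (the helper name and the simplified error handling are assumptions):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads url to dest and checks it against the hex digest
// published at url+".sha256", mirroring the checksum= pattern in the log.
func fetchVerified(url, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}

	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}

	got := sha256.Sum256(body)
	if hex.EncodeToString(got[:]) != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch for %s", url)
	}
	return os.WriteFile(dest, body, 0o755)
}

func main() {
	err := fetchVerified("https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl", "/tmp/kubectl")
	fmt.Println("result:", err)
}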
	I1129 09:01:46.884140  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:01:48.710027  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:49.210030  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:49.709395  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:50.209866  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:50.709354  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:51.209979  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:51.710291  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:52.209895  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:52.709970  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:53.209937  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:49.969644  494126 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:01:49.978574  494126 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1129 09:01:49.992833  494126 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:01:50.009876  494126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1129 09:01:50.023695  494126 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:01:50.027747  494126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:01:50.038376  494126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:01:50.121247  494126 ssh_runner.go:195] Run: sudo systemctl start kubelet
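The /etc/hosts step above (grep for the exact `IP<tab>control-plane.minikube.internal` record, otherwise filter out any old record and append a fresh one) keeps exactly one host entry for the control-plane alias. A sketch of the same rewrite in Go, operating on a scratch file rather than the real /etc/hosts (an assumption so it can run unprivileged):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostRecord reproduces the /etc/hosts edit from the log: drop any stale
// line ending in "<tab>name", append the desired "IP<tab>name" record, and
// write the file back.
func ensureHostRecord(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	fmt.Println(ensureHostRecord("/tmp/hosts", "192.168.103.2", "control-plane.minikube.internal"))
}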
	I1129 09:01:50.149394  494126 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441 for IP: 192.168.103.2
	I1129 09:01:50.149417  494126 certs.go:195] generating shared ca certs ...
	I1129 09:01:50.149438  494126 certs.go:227] acquiring lock for ca certs: {Name:mk5e6bcae0a6944966b241f3c6197a472703c991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:50.149602  494126 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.key
	I1129 09:01:50.149703  494126 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.key
	I1129 09:01:50.149717  494126 certs.go:257] generating profile certs ...
	I1129 09:01:50.149797  494126 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.key
	I1129 09:01:50.149812  494126 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.crt with IP's: []
	I1129 09:01:50.352856  494126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.crt ...
	I1129 09:01:50.352896  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.crt: {Name:mk24ad5255d5c075502606493622eaafcc9932fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:50.353102  494126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.key ...
	I1129 09:01:50.353115  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.key: {Name:mkdb2263ef25fafc1ea0385357022f8199c8aa35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:50.353223  494126 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.key.f72e5c7b
	I1129 09:01:50.353240  494126 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.crt.f72e5c7b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1129 09:01:50.513341  494126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.crt.f72e5c7b ...
	I1129 09:01:50.513379  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.crt.f72e5c7b: {Name:mk3f760c06958b6df21bcc9bde3527a0c97ad882 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:50.513582  494126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.key.f72e5c7b ...
	I1129 09:01:50.513601  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.key.f72e5c7b: {Name:mk4c8be15a8f6eca407c52c7afdc7ecb10357a29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:50.513678  494126 certs.go:382] copying /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.crt.f72e5c7b -> /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.crt
	I1129 09:01:50.513771  494126 certs.go:386] copying /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.key.f72e5c7b -> /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.key
	I1129 09:01:50.513831  494126 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.key
	I1129 09:01:50.513847  494126 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.crt with IP's: []
	I1129 09:01:50.651114  494126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.crt ...
	I1129 09:01:50.651146  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.crt: {Name:mkbdace4e62ecdfbe11ae904155295b956ffc842 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:50.651330  494126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.key ...
	I1129 09:01:50.651343  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.key: {Name:mk14d837fb2449197c689047daf9f07db1da4b8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
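certs.go generates three profile certificates here, each signed by the shared minikubeCA: a "minikube-user" client cert, the apiserver serving cert (with the service, localhost and node IPs as SANs), and the aggregator proxy-client cert. A self-contained sketch of issuing such a CA-signed client certificate with crypto/x509 (key size, subject and validity are illustrative assumptions, and a throwaway CA replaces the real ca.crt/ca.key):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

// newClientCert issues a client certificate signed by the given CA, the shape
// of the "minikube-user" profile cert generated above.
func newClientCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, []byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

func main() {
	// A real run would load the shared CA from ca.crt/ca.key; a throwaway CA
	// keeps the sketch self-contained.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)
	certPEM, _, err := newClientCert(ca, caKey)
	if err != nil {
		panic(err)
	}
	os.Stdout.Write(certPEM)
}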
	I1129 09:01:50.651522  494126 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483.pem (1338 bytes)
	W1129 09:01:50.651563  494126 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483_empty.pem, impossibly tiny 0 bytes
	I1129 09:01:50.651573  494126 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 09:01:50.651652  494126 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:01:50.651691  494126 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:01:50.651714  494126 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem (1679 bytes)
	I1129 09:01:50.651769  494126 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem (1708 bytes)
	I1129 09:01:50.652337  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:01:50.672071  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:01:50.691184  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:01:50.711306  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 09:01:50.730860  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1129 09:01:50.750662  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1129 09:01:50.771690  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:01:50.791789  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1129 09:01:50.811356  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483.pem --> /usr/share/ca-certificates/259483.pem (1338 bytes)
	I1129 09:01:50.833983  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem --> /usr/share/ca-certificates/2594832.pem (1708 bytes)
	I1129 09:01:50.853036  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:01:50.871262  494126 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:01:50.885099  494126 ssh_runner.go:195] Run: openssl version
	I1129 09:01:50.892072  494126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259483.pem && ln -fs /usr/share/ca-certificates/259483.pem /etc/ssl/certs/259483.pem"
	I1129 09:01:50.901864  494126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259483.pem
	I1129 09:01:50.906616  494126 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:35 /usr/share/ca-certificates/259483.pem
	I1129 09:01:50.906675  494126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259483.pem
	I1129 09:01:50.943595  494126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259483.pem /etc/ssl/certs/51391683.0"
	I1129 09:01:50.953459  494126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2594832.pem && ln -fs /usr/share/ca-certificates/2594832.pem /etc/ssl/certs/2594832.pem"
	I1129 09:01:50.962610  494126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2594832.pem
	I1129 09:01:50.966703  494126 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:35 /usr/share/ca-certificates/2594832.pem
	I1129 09:01:50.966778  494126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2594832.pem
	I1129 09:01:51.002253  494126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2594832.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:01:51.012487  494126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:01:51.022391  494126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:01:51.026710  494126 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:01:51.026814  494126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:01:51.063394  494126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
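The openssl x509 -hash -noout calls above compute the OpenSSL subject-name hash for each CA, and the following ln -fs creates the <hash>.0 symlink under /etc/ssl/certs that lets the system trust store find the cert. A sketch of those two steps, shelling out to openssl the same way (linking into a temp directory is an assumption so the real trust store is untouched):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert links certPath into certsDir under its OpenSSL subject hash
// (<hash>.0), the same layout the log builds under /etc/ssl/certs.
func trustCert(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return link, nil // already trusted
	}
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := trustCert("/usr/share/ca-certificates/minikubeCA.pem", os.TempDir())
	fmt.Println(link, err)
}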
	I1129 09:01:51.073278  494126 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:01:51.077328  494126 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 09:01:51.077396  494126 kubeadm.go:401] StartCluster: {Name:no-preload-924441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-924441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:01:51.077489  494126 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1129 09:01:51.077532  494126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:01:51.106096  494126 cri.go:89] found id: ""
	I1129 09:01:51.106183  494126 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:01:51.115333  494126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 09:01:51.123937  494126 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 09:01:51.124003  494126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 09:01:51.132534  494126 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 09:01:51.132560  494126 kubeadm.go:158] found existing configuration files:
	
	I1129 09:01:51.132605  494126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 09:01:51.140877  494126 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 09:01:51.140937  494126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 09:01:51.149370  494126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 09:01:51.157660  494126 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 09:01:51.157716  494126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 09:01:51.165600  494126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 09:01:51.173968  494126 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 09:01:51.174023  494126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 09:01:51.182141  494126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 09:01:51.190488  494126 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 09:01:51.190548  494126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
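The four grep/rm pairs above are the stale-config cleanup: each kubeconfig under /etc/kubernetes survives only if it already points at https://control-plane.minikube.internal:8443, otherwise it is deleted so kubeadm can regenerate it. A compact sketch of that per-file decision (file list and endpoint are hard-coded here as assumptions):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// pruneStaleKubeconfigs removes any of the given kubeconfig files that do not
// reference the expected control-plane endpoint, matching the grep-then-rm
// sequence in the log. Missing files are simply skipped.
func pruneStaleKubeconfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			continue // not present, nothing to clean up
		}
		if !bytes.Contains(data, []byte(endpoint)) {
			fmt.Printf("removing stale %s\n", f)
			os.Remove(f)
		}
	}
}

func main() {
	pruneStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}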
	I1129 09:01:51.198568  494126 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 09:01:51.257848  494126 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1129 09:01:51.317135  494126 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1129 09:01:51.885035  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1129 09:01:51.885110  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:01:51.885188  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:01:51.917617  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:01:51.917638  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:51.917644  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:51.917647  460401 cri.go:89] found id: ""
	I1129 09:01:51.917655  460401 logs.go:282] 3 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:01:51.917717  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:51.923877  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:51.929304  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:51.934465  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:01:51.934561  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:01:51.963685  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:51.963708  460401 cri.go:89] found id: ""
	I1129 09:01:51.963719  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:01:51.963801  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:51.968956  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:01:51.969028  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:01:51.996971  460401 cri.go:89] found id: ""
	I1129 09:01:51.997000  460401 logs.go:282] 0 containers: []
	W1129 09:01:51.997007  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:01:51.997013  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:01:51.997078  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:01:52.028822  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:52.028850  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:52.028856  460401 cri.go:89] found id: ""
	I1129 09:01:52.028866  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:01:52.028936  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:52.034812  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:52.039943  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:01:52.040009  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:01:52.069835  460401 cri.go:89] found id: ""
	I1129 09:01:52.069866  460401 logs.go:282] 0 containers: []
	W1129 09:01:52.069878  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:01:52.069886  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:01:52.069952  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:01:52.104321  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:52.104340  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:52.104344  460401 cri.go:89] found id: ""
	I1129 09:01:52.104352  460401 logs.go:282] 2 containers: [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:01:52.104402  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:52.109901  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:52.114778  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:01:52.114862  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:01:52.144981  460401 cri.go:89] found id: ""
	I1129 09:01:52.145005  460401 logs.go:282] 0 containers: []
	W1129 09:01:52.145013  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:01:52.145019  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:01:52.145069  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:01:52.174604  460401 cri.go:89] found id: ""
	I1129 09:01:52.174632  460401 logs.go:282] 0 containers: []
	W1129 09:01:52.174641  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:01:52.174651  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:01:52.174665  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:52.207427  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:01:52.207458  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:52.249558  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:01:52.249600  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:01:52.300742  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:01:52.300785  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:01:52.385321  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:01:52.385365  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:01:52.405491  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:01:52.405533  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:52.448465  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:01:52.448502  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:52.489466  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:01:52.489506  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:52.534107  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:01:52.534146  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:52.572361  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:01:52.572401  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:01:52.606656  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:01:52.606692  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
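This burst from the other profile is the post-failure log sweep: every control-plane container id found via crictl gets its last 400 log lines collected, plus journalctl output for kubelet and containerd and a `kubectl describe nodes`. A minimal sketch of one collection step via os/exec (the container id and tail length are copied from the log; on a host without crictl this simply returns an error):

package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs returns the last n lines of a CRI container's logs,
// the same call shape as the "Gathering logs for ..." entries above.
func tailContainerLogs(id string, n int) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	logs, err := tailContainerLogs("f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625", 400)
	if err != nil {
		fmt.Println("collection failed:", err)
		return
	}
	fmt.Print(logs)
}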
	I1129 09:01:53.710005  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:54.209471  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:54.709414  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:55.209967  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:55.709378  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:56.210032  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:56.709982  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:57.209266  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:57.709968  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:58.209425  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:58.303052  493486 kubeadm.go:1114] duration metric: took 11.19438409s to wait for elevateKubeSystemPrivileges
	I1129 09:01:58.303107  493486 kubeadm.go:403] duration metric: took 21.598001105s to StartCluster
	I1129 09:01:58.303162  493486 settings.go:142] acquiring lock: {Name:mk6dbed29e5e99d89b1cbbd9e561d8f8791ae9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:58.303278  493486 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-255825/kubeconfig
	I1129 09:01:58.305561  493486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/kubeconfig: {Name:mk7d91966efd00ccef892cf02f31ec14469accbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:58.305924  493486 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1129 09:01:58.306112  493486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 09:01:58.306351  493486 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:01:58.306713  493486 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-295154"
	I1129 09:01:58.306776  493486 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-295154"
	I1129 09:01:58.306795  493486 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-295154"
	I1129 09:01:58.306776  493486 config.go:182] Loaded profile config "old-k8s-version-295154": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1129 09:01:58.306807  493486 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-295154"
	I1129 09:01:58.306834  493486 host.go:66] Checking if "old-k8s-version-295154" exists ...
	I1129 09:01:58.307864  493486 out.go:179] * Verifying Kubernetes components...
	I1129 09:01:58.307930  493486 cli_runner.go:164] Run: docker container inspect old-k8s-version-295154 --format={{.State.Status}}
	I1129 09:01:58.308039  493486 cli_runner.go:164] Run: docker container inspect old-k8s-version-295154 --format={{.State.Status}}
	I1129 09:01:58.309327  493486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:01:58.335085  493486 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-295154"
	I1129 09:01:58.335144  493486 host.go:66] Checking if "old-k8s-version-295154" exists ...
	I1129 09:01:58.335642  493486 cli_runner.go:164] Run: docker container inspect old-k8s-version-295154 --format={{.State.Status}}
	I1129 09:01:58.337139  493486 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:58.338693  493486 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:01:58.338716  493486 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:01:58.338899  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:58.368947  493486 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:01:58.368979  493486 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:01:58.369072  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:58.378680  493486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa Username:docker}
	I1129 09:01:58.399464  493486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa Username:docker}
	I1129 09:01:58.438617  493486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 09:01:58.498671  493486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:01:58.528524  493486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:01:58.536443  493486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:01:58.718007  493486 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1129 09:01:58.719713  493486 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-295154" to be "Ready" ...
	I1129 09:01:58.976512  493486 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
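After the addons are enabled, node_ready.go polls the API server, for up to 6m0s here, until the node's Ready condition turns True; the retries further down show it is still False at this point. A rough client-go sketch of that readiness loop (kubeconfig path, poll interval and function name are assumptions, not minikube's implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the named node until its Ready condition is True or the
// timeout expires, the same loop the "waiting up to 6m0s" log line describes.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %s not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitNodeReady(context.Background(), cs, "old-k8s-version-295154", 6*time.Minute))
}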
	I1129 09:02:01.574795  494126 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1129 09:02:01.574869  494126 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 09:02:01.575071  494126 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 09:02:01.575154  494126 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1129 09:02:01.575204  494126 kubeadm.go:319] OS: Linux
	I1129 09:02:01.575304  494126 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 09:02:01.575403  494126 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 09:02:01.575496  494126 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 09:02:01.575567  494126 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 09:02:01.575645  494126 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 09:02:01.575713  494126 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 09:02:01.575809  494126 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 09:02:01.575872  494126 kubeadm.go:319] CGROUPS_IO: enabled
	I1129 09:02:01.575964  494126 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 09:02:01.576092  494126 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 09:02:01.576217  494126 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1129 09:02:01.576325  494126 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 09:02:01.578171  494126 out.go:252]   - Generating certificates and keys ...
	I1129 09:02:01.578298  494126 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 09:02:01.578401  494126 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 09:02:01.578499  494126 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1129 09:02:01.578589  494126 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 09:02:01.578680  494126 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 09:02:01.578785  494126 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 09:02:01.578876  494126 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 09:02:01.579019  494126 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-924441] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1129 09:02:01.579122  494126 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 09:02:01.579311  494126 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-924441] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1129 09:02:01.579420  494126 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 09:02:01.579532  494126 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 09:02:01.579609  494126 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 09:02:01.579696  494126 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 09:02:01.579806  494126 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 09:02:01.579894  494126 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 09:02:01.579971  494126 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 09:02:01.580076  494126 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 09:02:01.580125  494126 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 09:02:01.580195  494126 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 09:02:01.580259  494126 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 09:02:01.582121  494126 out.go:252]   - Booting up control plane ...
	I1129 09:02:01.582267  494126 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 09:02:01.582364  494126 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 09:02:01.582460  494126 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 09:02:01.582603  494126 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 09:02:01.582773  494126 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1129 09:02:01.582902  494126 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1129 09:02:01.583026  494126 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 09:02:01.583068  494126 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 09:02:01.583182  494126 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1129 09:02:01.583325  494126 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1129 09:02:01.583413  494126 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001845652s
	I1129 09:02:01.583537  494126 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1129 09:02:01.583671  494126 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1129 09:02:01.583787  494126 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1129 09:02:01.583879  494126 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1129 09:02:01.583985  494126 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.852889014s
	I1129 09:02:01.584071  494126 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.023243656s
	I1129 09:02:01.584163  494126 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00195345s
	I1129 09:02:01.584314  494126 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 09:02:01.584493  494126 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 09:02:01.584584  494126 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 09:02:01.584867  494126 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-924441 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 09:02:01.584955  494126 kubeadm.go:319] [bootstrap-token] Using token: mvtuq7.pg2byk8o9fh5nfa2
	I1129 09:02:01.587787  494126 out.go:252]   - Configuring RBAC rules ...
	I1129 09:02:01.587916  494126 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 09:02:01.588028  494126 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 09:02:01.588232  494126 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 09:02:01.588384  494126 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 09:02:01.588517  494126 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 09:02:01.588635  494126 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 09:02:01.588779  494126 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 09:02:01.588837  494126 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 09:02:01.588907  494126 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 09:02:01.588916  494126 kubeadm.go:319] 
	I1129 09:02:01.589016  494126 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 09:02:01.589032  494126 kubeadm.go:319] 
	I1129 09:02:01.589151  494126 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 09:02:01.589160  494126 kubeadm.go:319] 
	I1129 09:02:01.589205  494126 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 09:02:01.589280  494126 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 09:02:01.589374  494126 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 09:02:01.589388  494126 kubeadm.go:319] 
	I1129 09:02:01.589465  494126 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 09:02:01.589473  494126 kubeadm.go:319] 
	I1129 09:02:01.589554  494126 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 09:02:01.589563  494126 kubeadm.go:319] 
	I1129 09:02:01.589607  494126 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 09:02:01.589671  494126 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 09:02:01.589782  494126 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 09:02:01.589795  494126 kubeadm.go:319] 
	I1129 09:02:01.589906  494126 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 09:02:01.590049  494126 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 09:02:01.590058  494126 kubeadm.go:319] 
	I1129 09:02:01.590132  494126 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token mvtuq7.pg2byk8o9fh5nfa2 \
	I1129 09:02:01.590268  494126 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:cfb13a4080e942b53ddf5e01885fcdd270ac918e177076400130991e2b6b7778 \
	I1129 09:02:01.590302  494126 kubeadm.go:319] 	--control-plane 
	I1129 09:02:01.590309  494126 kubeadm.go:319] 
	I1129 09:02:01.590425  494126 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 09:02:01.590434  494126 kubeadm.go:319] 
	I1129 09:02:01.590567  494126 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token mvtuq7.pg2byk8o9fh5nfa2 \
	I1129 09:02:01.590744  494126 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:cfb13a4080e942b53ddf5e01885fcdd270ac918e177076400130991e2b6b7778 
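The --discovery-token-ca-cert-hash in the printed join command is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, which joining nodes use to pin the CA. It can be recomputed from ca.crt with the standard library, as in this sketch (the certificate path is an assumption):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash recomputes kubeadm's discovery-token-ca-cert-hash: sha256 over
// the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate.
func caCertHash(caPath string) (string, error) {
	data, err := os.ReadFile(caPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", caPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return "sha256:" + hex.EncodeToString(sum[:]), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	fmt.Println(h, err)
}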
	I1129 09:02:01.590761  494126 cni.go:84] Creating CNI manager for ""
	I1129 09:02:01.590770  494126 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:02:01.592367  494126 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1129 09:01:58.977447  493486 addons.go:530] duration metric: took 671.096745ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1129 09:01:59.226693  493486 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-295154" context rescaled to 1 replicas
	W1129 09:02:00.723077  493486 node_ready.go:57] node "old-k8s-version-295154" has "Ready":"False" status (will retry)
	W1129 09:02:02.723240  493486 node_ready.go:57] node "old-k8s-version-295154" has "Ready":"False" status (will retry)
	I1129 09:02:01.593492  494126 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 09:02:01.598544  494126 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1129 09:02:01.598567  494126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 09:02:01.615144  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1129 09:02:01.883935  494126 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 09:02:01.884024  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:01.884114  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-924441 minikube.k8s.io/updated_at=2025_11_29T09_02_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=no-preload-924441 minikube.k8s.io/primary=true
	I1129 09:02:01.969638  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:01.982178  494126 ops.go:34] apiserver oom_adj: -16
	I1129 09:02:02.470301  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:02.969878  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:03.470379  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:03.970554  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:04.469853  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:02.669495  460401 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.062771993s)
	W1129 09:02:02.669547  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1129 09:02:02.669577  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:02.669596  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:02.710559  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:02.710605  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:04.970119  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:05.470767  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:05.969852  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:06.052010  494126 kubeadm.go:1114] duration metric: took 4.168052566s to wait for elevateKubeSystemPrivileges
	I1129 09:02:06.052057  494126 kubeadm.go:403] duration metric: took 14.974666914s to StartCluster
	I1129 09:02:06.052081  494126 settings.go:142] acquiring lock: {Name:mk6dbed29e5e99d89b1cbbd9e561d8f8791ae9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:02:06.052174  494126 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-255825/kubeconfig
	I1129 09:02:06.054258  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/kubeconfig: {Name:mk7d91966efd00ccef892cf02f31ec14469accbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:02:06.054571  494126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 09:02:06.054563  494126 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1129 09:02:06.054635  494126 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:02:06.054874  494126 config.go:182] Loaded profile config "no-preload-924441": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:02:06.054888  494126 addons.go:70] Setting storage-provisioner=true in profile "no-preload-924441"
	I1129 09:02:06.054933  494126 addons.go:70] Setting default-storageclass=true in profile "no-preload-924441"
	I1129 09:02:06.054947  494126 addons.go:239] Setting addon storage-provisioner=true in "no-preload-924441"
	I1129 09:02:06.054963  494126 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-924441"
	I1129 09:02:06.055012  494126 host.go:66] Checking if "no-preload-924441" exists ...
	I1129 09:02:06.055424  494126 cli_runner.go:164] Run: docker container inspect no-preload-924441 --format={{.State.Status}}
	I1129 09:02:06.055667  494126 cli_runner.go:164] Run: docker container inspect no-preload-924441 --format={{.State.Status}}
	I1129 09:02:06.056967  494126 out.go:179] * Verifying Kubernetes components...
	I1129 09:02:06.060417  494126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:02:06.083076  494126 addons.go:239] Setting addon default-storageclass=true in "no-preload-924441"
	I1129 09:02:06.083127  494126 host.go:66] Checking if "no-preload-924441" exists ...
	I1129 09:02:06.083615  494126 cli_runner.go:164] Run: docker container inspect no-preload-924441 --format={{.State.Status}}
	I1129 09:02:06.086028  494126 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:02:06.087100  494126 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:02:06.087121  494126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:02:06.087200  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:02:06.110337  494126 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:02:06.110366  494126 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:02:06.111183  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:02:06.116769  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa Username:docker}
	I1129 09:02:06.140007  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa Username:docker}
	I1129 09:02:06.151655  494126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 09:02:06.208406  494126 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:02:06.241470  494126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:02:06.273558  494126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:02:06.324896  494126 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1129 09:02:06.327889  494126 node_ready.go:35] waiting up to 6m0s for node "no-preload-924441" to be "Ready" ...
	I1129 09:02:06.574594  494126 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1129 09:02:05.223590  493486 node_ready.go:57] node "old-k8s-version-295154" has "Ready":"False" status (will retry)
	W1129 09:02:07.223929  493486 node_ready.go:57] node "old-k8s-version-295154" has "Ready":"False" status (will retry)
	I1129 09:02:06.575644  494126 addons.go:530] duration metric: took 521.007476ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1129 09:02:06.830448  494126 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-924441" context rescaled to 1 replicas
	W1129 09:02:08.331406  494126 node_ready.go:57] node "no-preload-924441" has "Ready":"False" status (will retry)
	I1129 09:02:05.259668  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:02:07.201576  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:43246->192.168.85.2:8443: read: connection reset by peer
	I1129 09:02:07.201690  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:02:07.201778  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:02:07.234753  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:07.234781  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:02:07.234788  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:07.234793  460401 cri.go:89] found id: ""
	I1129 09:02:07.234804  460401 logs.go:282] 3 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:02:07.234869  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.240257  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.245641  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.251131  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:02:07.251196  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:02:07.280579  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:07.280608  460401 cri.go:89] found id: ""
	I1129 09:02:07.280621  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:02:07.280682  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.286123  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:02:07.286213  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:02:07.317491  460401 cri.go:89] found id: ""
	I1129 09:02:07.317519  460401 logs.go:282] 0 containers: []
	W1129 09:02:07.317528  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:02:07.317534  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:02:07.317586  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:02:07.347513  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:07.347534  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:07.347538  460401 cri.go:89] found id: ""
	I1129 09:02:07.347546  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:02:07.347610  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.353144  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.358223  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:02:07.358303  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:02:07.387488  460401 cri.go:89] found id: ""
	I1129 09:02:07.387516  460401 logs.go:282] 0 containers: []
	W1129 09:02:07.387525  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:02:07.387532  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:02:07.387595  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:02:07.418490  460401 cri.go:89] found id: "c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:07.418512  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:07.418516  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:07.418519  460401 cri.go:89] found id: ""
	I1129 09:02:07.418527  460401 logs.go:282] 3 containers: [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:02:07.418587  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.423956  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.429140  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.434196  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:02:07.434281  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:02:07.463114  460401 cri.go:89] found id: ""
	I1129 09:02:07.463138  460401 logs.go:282] 0 containers: []
	W1129 09:02:07.463148  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:02:07.463156  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:02:07.463222  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:02:07.494533  460401 cri.go:89] found id: ""
	I1129 09:02:07.494567  460401 logs.go:282] 0 containers: []
	W1129 09:02:07.494579  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:02:07.494592  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:02:07.494604  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:02:07.546238  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:02:07.546282  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:02:07.634664  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:02:07.634702  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:02:07.696753  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:02:07.696779  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:07.696796  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:07.733303  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:07.733343  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:07.786770  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:02:07.786809  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:07.824791  460401 logs.go:123] Gathering logs for kube-controller-manager [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6] ...
	I1129 09:02:07.824831  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:07.857029  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:02:07.857058  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:02:07.892009  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:02:07.892046  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:02:07.907552  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:02:07.907596  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	W1129 09:02:07.937558  460401 logs.go:130] failed kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095": Process exited with status 1
	stdout:
	
	stderr:
	E1129 09:02:07.934436    4413 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095\": not found" containerID="5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	time="2025-11-29T09:02:07Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095\": not found"
	 output: 
	** stderr ** 
	E1129 09:02:07.934436    4413 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095\": not found" containerID="5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	time="2025-11-29T09:02:07Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095\": not found"
	
	** /stderr **
	I1129 09:02:07.937577  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:02:07.937591  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:07.976501  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:02:07.976553  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:08.017968  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:02:08.018008  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:08.049057  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:02:08.049090  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	W1129 09:02:09.723662  493486 node_ready.go:57] node "old-k8s-version-295154" has "Ready":"False" status (will retry)
	W1129 09:02:12.223024  493486 node_ready.go:57] node "old-k8s-version-295154" has "Ready":"False" status (will retry)
	I1129 09:02:13.224090  493486 node_ready.go:49] node "old-k8s-version-295154" is "Ready"
	I1129 09:02:13.224128  493486 node_ready.go:38] duration metric: took 14.504358398s for node "old-k8s-version-295154" to be "Ready" ...
	I1129 09:02:13.224148  493486 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:02:13.224211  493486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:02:13.243313  493486 api_server.go:72] duration metric: took 14.93733902s to wait for apiserver process to appear ...
	I1129 09:02:13.243343  493486 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:02:13.243370  493486 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:02:13.250694  493486 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1129 09:02:13.251984  493486 api_server.go:141] control plane version: v1.28.0
	I1129 09:02:13.252015  493486 api_server.go:131] duration metric: took 8.663278ms to wait for apiserver health ...
	I1129 09:02:13.252026  493486 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:02:13.255767  493486 system_pods.go:59] 8 kube-system pods found
	I1129 09:02:13.255813  493486 system_pods.go:61] "coredns-5dd5756b68-phw28" [7fc2b8dd-43dd-43df-8887-9ffa6de36fb4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:13.255822  493486 system_pods.go:61] "etcd-old-k8s-version-295154" [b49cf7c8-8d72-4db9-a96f-d796fd8d9e08] Running
	I1129 09:02:13.255829  493486 system_pods.go:61] "kindnet-k4n9l" [74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8] Running
	I1129 09:02:13.255835  493486 system_pods.go:61] "kube-apiserver-old-k8s-version-295154" [e4ca0771-197f-4d77-97f0-7a7778e227de] Running
	I1129 09:02:13.255841  493486 system_pods.go:61] "kube-controller-manager-old-k8s-version-295154" [6825ac68-da0d-474d-ac97-53398adffd73] Running
	I1129 09:02:13.255847  493486 system_pods.go:61] "kube-proxy-4rfb4" [05ef67c3-0d6e-453d-a0e5-81c649c3e033] Running
	I1129 09:02:13.255853  493486 system_pods.go:61] "kube-scheduler-old-k8s-version-295154" [97d5e6fb-5cb8-4a03-a8df-3f76df5b2671] Running
	I1129 09:02:13.255860  493486 system_pods.go:61] "storage-provisioner" [359871fd-a77c-430a-87c1-b313992718e2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:13.255869  493486 system_pods.go:74] duration metric: took 3.834915ms to wait for pod list to return data ...
	I1129 09:02:13.255879  493486 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:02:13.259936  493486 default_sa.go:45] found service account: "default"
	I1129 09:02:13.259965  493486 default_sa.go:55] duration metric: took 4.078247ms for default service account to be created ...
	I1129 09:02:13.259977  493486 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:02:13.264489  493486 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:13.264528  493486 system_pods.go:89] "coredns-5dd5756b68-phw28" [7fc2b8dd-43dd-43df-8887-9ffa6de36fb4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:13.264536  493486 system_pods.go:89] "etcd-old-k8s-version-295154" [b49cf7c8-8d72-4db9-a96f-d796fd8d9e08] Running
	I1129 09:02:13.264545  493486 system_pods.go:89] "kindnet-k4n9l" [74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8] Running
	I1129 09:02:13.264554  493486 system_pods.go:89] "kube-apiserver-old-k8s-version-295154" [e4ca0771-197f-4d77-97f0-7a7778e227de] Running
	I1129 09:02:13.264562  493486 system_pods.go:89] "kube-controller-manager-old-k8s-version-295154" [6825ac68-da0d-474d-ac97-53398adffd73] Running
	I1129 09:02:13.264567  493486 system_pods.go:89] "kube-proxy-4rfb4" [05ef67c3-0d6e-453d-a0e5-81c649c3e033] Running
	I1129 09:02:13.264572  493486 system_pods.go:89] "kube-scheduler-old-k8s-version-295154" [97d5e6fb-5cb8-4a03-a8df-3f76df5b2671] Running
	I1129 09:02:13.264586  493486 system_pods.go:89] "storage-provisioner" [359871fd-a77c-430a-87c1-b313992718e2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:13.264615  493486 retry.go:31] will retry after 309.906184ms: missing components: kube-dns
	W1129 09:02:10.832100  494126 node_ready.go:57] node "no-preload-924441" has "Ready":"False" status (will retry)
	W1129 09:02:13.330706  494126 node_ready.go:57] node "no-preload-924441" has "Ready":"False" status (will retry)
	I1129 09:02:10.584596  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:02:10.585082  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:02:10.585139  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:02:10.585192  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:02:10.615813  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:10.615833  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:10.615837  460401 cri.go:89] found id: ""
	I1129 09:02:10.615846  460401 logs.go:282] 2 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:02:10.615910  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.621079  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.625927  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:02:10.626017  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:02:10.655780  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:10.655808  460401 cri.go:89] found id: ""
	I1129 09:02:10.655817  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:02:10.655877  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.661197  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:02:10.661278  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:02:10.692401  460401 cri.go:89] found id: ""
	I1129 09:02:10.692423  460401 logs.go:282] 0 containers: []
	W1129 09:02:10.692431  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:02:10.692436  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:02:10.692496  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:02:10.721278  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:10.721303  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:10.721309  460401 cri.go:89] found id: ""
	I1129 09:02:10.721320  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:02:10.721387  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.726913  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.731556  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:02:10.731637  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:02:10.759345  460401 cri.go:89] found id: ""
	I1129 09:02:10.759373  460401 logs.go:282] 0 containers: []
	W1129 09:02:10.759381  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:02:10.759386  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:02:10.759446  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:02:10.790190  460401 cri.go:89] found id: "c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:10.790215  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:10.790221  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:10.790226  460401 cri.go:89] found id: ""
	I1129 09:02:10.790236  460401 logs.go:282] 3 containers: [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:02:10.790305  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.795588  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.800622  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.805263  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:02:10.805338  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:02:10.834942  460401 cri.go:89] found id: ""
	I1129 09:02:10.834973  460401 logs.go:282] 0 containers: []
	W1129 09:02:10.834991  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:02:10.834999  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:02:10.835065  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:02:10.872503  460401 cri.go:89] found id: ""
	I1129 09:02:10.872536  460401 logs.go:282] 0 containers: []
	W1129 09:02:10.872547  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:02:10.872562  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:10.872586  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:10.926644  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:02:10.926681  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:10.965025  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:02:10.965069  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:10.998068  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:02:10.998102  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:02:11.043686  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:02:11.043743  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:02:11.134380  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:02:11.134422  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:02:11.150475  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:02:11.150510  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:02:11.210329  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:02:11.210348  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:02:11.210364  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:11.250422  460401 logs.go:123] Gathering logs for kube-controller-manager [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6] ...
	I1129 09:02:11.250457  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:11.280219  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:02:11.280255  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:11.315565  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:02:11.315596  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:02:11.349327  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:11.349358  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:11.384696  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:02:11.384729  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:13.923850  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:02:13.924341  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:02:13.924398  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:02:13.924461  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:02:13.954410  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:13.954430  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:13.954434  460401 cri.go:89] found id: ""
	I1129 09:02:13.954442  460401 logs.go:282] 2 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:02:13.954501  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:13.959624  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:13.964312  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:02:13.964377  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:02:13.992596  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:13.992625  460401 cri.go:89] found id: ""
	I1129 09:02:13.992636  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:02:13.992703  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:13.998893  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:02:13.998972  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:02:14.028106  460401 cri.go:89] found id: ""
	I1129 09:02:14.028140  460401 logs.go:282] 0 containers: []
	W1129 09:02:14.028152  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:02:14.028161  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:02:14.028230  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:02:14.057393  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:14.057414  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:14.057418  460401 cri.go:89] found id: ""
	I1129 09:02:14.057427  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:02:14.057482  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:14.062623  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:14.067579  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:02:14.067654  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:02:14.102801  460401 cri.go:89] found id: ""
	I1129 09:02:14.102840  460401 logs.go:282] 0 containers: []
	W1129 09:02:14.102853  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:02:14.102860  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:02:14.102925  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:02:14.135951  460401 cri.go:89] found id: "c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:14.135979  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:14.135985  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:14.135988  460401 cri.go:89] found id: ""
	I1129 09:02:14.135998  460401 logs.go:282] 3 containers: [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:02:14.136064  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:14.141983  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:14.147316  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:14.152463  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:02:14.152555  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:02:14.181365  460401 cri.go:89] found id: ""
	I1129 09:02:14.181398  460401 logs.go:282] 0 containers: []
	W1129 09:02:14.181409  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:02:14.181417  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:02:14.181477  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:02:14.210267  460401 cri.go:89] found id: ""
	I1129 09:02:14.210292  460401 logs.go:282] 0 containers: []
	W1129 09:02:14.210300  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:02:14.210310  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:02:14.210323  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:02:14.298625  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:02:14.298662  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:02:14.315504  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:02:14.315529  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:14.357098  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:14.357134  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:14.407082  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:02:14.407133  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:14.441442  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:02:14.441482  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:14.476419  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:02:14.476452  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:02:13.579150  493486 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:13.579183  493486 system_pods.go:89] "coredns-5dd5756b68-phw28" [7fc2b8dd-43dd-43df-8887-9ffa6de36fb4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:13.579189  493486 system_pods.go:89] "etcd-old-k8s-version-295154" [b49cf7c8-8d72-4db9-a96f-d796fd8d9e08] Running
	I1129 09:02:13.579195  493486 system_pods.go:89] "kindnet-k4n9l" [74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8] Running
	I1129 09:02:13.579199  493486 system_pods.go:89] "kube-apiserver-old-k8s-version-295154" [e4ca0771-197f-4d77-97f0-7a7778e227de] Running
	I1129 09:02:13.579203  493486 system_pods.go:89] "kube-controller-manager-old-k8s-version-295154" [6825ac68-da0d-474d-ac97-53398adffd73] Running
	I1129 09:02:13.579206  493486 system_pods.go:89] "kube-proxy-4rfb4" [05ef67c3-0d6e-453d-a0e5-81c649c3e033] Running
	I1129 09:02:13.579210  493486 system_pods.go:89] "kube-scheduler-old-k8s-version-295154" [97d5e6fb-5cb8-4a03-a8df-3f76df5b2671] Running
	I1129 09:02:13.579220  493486 system_pods.go:89] "storage-provisioner" [359871fd-a77c-430a-87c1-b313992718e2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:13.579237  493486 retry.go:31] will retry after 360.039109ms: missing components: kube-dns
	I1129 09:02:13.944039  493486 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:13.944084  493486 system_pods.go:89] "coredns-5dd5756b68-phw28" [7fc2b8dd-43dd-43df-8887-9ffa6de36fb4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:13.944094  493486 system_pods.go:89] "etcd-old-k8s-version-295154" [b49cf7c8-8d72-4db9-a96f-d796fd8d9e08] Running
	I1129 09:02:13.944104  493486 system_pods.go:89] "kindnet-k4n9l" [74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8] Running
	I1129 09:02:13.944110  493486 system_pods.go:89] "kube-apiserver-old-k8s-version-295154" [e4ca0771-197f-4d77-97f0-7a7778e227de] Running
	I1129 09:02:13.944116  493486 system_pods.go:89] "kube-controller-manager-old-k8s-version-295154" [6825ac68-da0d-474d-ac97-53398adffd73] Running
	I1129 09:02:13.944121  493486 system_pods.go:89] "kube-proxy-4rfb4" [05ef67c3-0d6e-453d-a0e5-81c649c3e033] Running
	I1129 09:02:13.944127  493486 system_pods.go:89] "kube-scheduler-old-k8s-version-295154" [97d5e6fb-5cb8-4a03-a8df-3f76df5b2671] Running
	I1129 09:02:13.944133  493486 system_pods.go:89] "storage-provisioner" [359871fd-a77c-430a-87c1-b313992718e2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:13.944166  493486 retry.go:31] will retry after 339.658127ms: missing components: kube-dns
	I1129 09:02:14.288499  493486 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:14.288533  493486 system_pods.go:89] "coredns-5dd5756b68-phw28" [7fc2b8dd-43dd-43df-8887-9ffa6de36fb4] Running
	I1129 09:02:14.288543  493486 system_pods.go:89] "etcd-old-k8s-version-295154" [b49cf7c8-8d72-4db9-a96f-d796fd8d9e08] Running
	I1129 09:02:14.288548  493486 system_pods.go:89] "kindnet-k4n9l" [74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8] Running
	I1129 09:02:14.288553  493486 system_pods.go:89] "kube-apiserver-old-k8s-version-295154" [e4ca0771-197f-4d77-97f0-7a7778e227de] Running
	I1129 09:02:14.288563  493486 system_pods.go:89] "kube-controller-manager-old-k8s-version-295154" [6825ac68-da0d-474d-ac97-53398adffd73] Running
	I1129 09:02:14.288568  493486 system_pods.go:89] "kube-proxy-4rfb4" [05ef67c3-0d6e-453d-a0e5-81c649c3e033] Running
	I1129 09:02:14.288573  493486 system_pods.go:89] "kube-scheduler-old-k8s-version-295154" [97d5e6fb-5cb8-4a03-a8df-3f76df5b2671] Running
	I1129 09:02:14.288578  493486 system_pods.go:89] "storage-provisioner" [359871fd-a77c-430a-87c1-b313992718e2] Running
	I1129 09:02:14.288588  493486 system_pods.go:126] duration metric: took 1.028603527s to wait for k8s-apps to be running ...
	I1129 09:02:14.288601  493486 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:02:14.288662  493486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:02:14.302535  493486 system_svc.go:56] duration metric: took 13.922382ms WaitForService to wait for kubelet
	I1129 09:02:14.302570  493486 kubeadm.go:587] duration metric: took 15.996603485s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:02:14.302594  493486 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:02:14.305508  493486 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1129 09:02:14.305535  493486 node_conditions.go:123] node cpu capacity is 8
	I1129 09:02:14.305552  493486 node_conditions.go:105] duration metric: took 2.953214ms to run NodePressure ...
	I1129 09:02:14.305564  493486 start.go:242] waiting for startup goroutines ...
	I1129 09:02:14.305570  493486 start.go:247] waiting for cluster config update ...
	I1129 09:02:14.305583  493486 start.go:256] writing updated cluster config ...
	I1129 09:02:14.305887  493486 ssh_runner.go:195] Run: rm -f paused
	I1129 09:02:14.309803  493486 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:02:14.314558  493486 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-phw28" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.319446  493486 pod_ready.go:94] pod "coredns-5dd5756b68-phw28" is "Ready"
	I1129 09:02:14.319479  493486 pod_ready.go:86] duration metric: took 4.889509ms for pod "coredns-5dd5756b68-phw28" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.322499  493486 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.326608  493486 pod_ready.go:94] pod "etcd-old-k8s-version-295154" is "Ready"
	I1129 09:02:14.326631  493486 pod_ready.go:86] duration metric: took 4.109693ms for pod "etcd-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.329352  493486 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.333844  493486 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-295154" is "Ready"
	I1129 09:02:14.333867  493486 pod_ready.go:86] duration metric: took 4.49688ms for pod "kube-apiserver-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.336686  493486 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.714439  493486 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-295154" is "Ready"
	I1129 09:02:14.714472  493486 pod_ready.go:86] duration metric: took 377.765984ms for pod "kube-controller-manager-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.915822  493486 pod_ready.go:83] waiting for pod "kube-proxy-4rfb4" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:15.314552  493486 pod_ready.go:94] pod "kube-proxy-4rfb4" is "Ready"
	I1129 09:02:15.314586  493486 pod_ready.go:86] duration metric: took 398.736001ms for pod "kube-proxy-4rfb4" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:15.515989  493486 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:15.913869  493486 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-295154" is "Ready"
	I1129 09:02:15.913896  493486 pod_ready.go:86] duration metric: took 397.877691ms for pod "kube-scheduler-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:15.913908  493486 pod_ready.go:40] duration metric: took 1.604073956s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:02:15.959941  493486 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1129 09:02:15.961883  493486 out.go:203] 
	W1129 09:02:15.963183  493486 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1129 09:02:15.964449  493486 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1129 09:02:15.966035  493486 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-295154" cluster and "default" namespace by default
	W1129 09:02:15.330798  494126 node_ready.go:57] node "no-preload-924441" has "Ready":"False" status (will retry)
	W1129 09:02:17.331851  494126 node_ready.go:57] node "no-preload-924441" has "Ready":"False" status (will retry)
	I1129 09:02:14.509454  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:02:14.509484  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:02:14.571273  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:02:14.571298  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:14.571312  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:14.605440  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:02:14.605476  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:14.642678  460401 logs.go:123] Gathering logs for kube-controller-manager [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6] ...
	I1129 09:02:14.642712  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:14.671483  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:02:14.671514  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:14.701619  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:02:14.701647  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:02:17.246912  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:02:17.247337  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:02:17.247422  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:02:17.247479  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:02:17.277610  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:17.277632  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:17.277637  460401 cri.go:89] found id: ""
	I1129 09:02:17.277647  460401 logs.go:282] 2 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:02:17.277711  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.283531  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.288554  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:02:17.288644  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:02:17.316819  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:17.316847  460401 cri.go:89] found id: ""
	I1129 09:02:17.316857  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:02:17.316921  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.322640  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:02:17.322770  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:02:17.353531  460401 cri.go:89] found id: ""
	I1129 09:02:17.353563  460401 logs.go:282] 0 containers: []
	W1129 09:02:17.353575  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:02:17.353585  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:02:17.353651  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:02:17.384830  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:17.384854  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:17.384858  460401 cri.go:89] found id: ""
	I1129 09:02:17.384867  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:02:17.384932  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.390132  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.395096  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:02:17.395177  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:02:17.425643  460401 cri.go:89] found id: ""
	I1129 09:02:17.425681  460401 logs.go:282] 0 containers: []
	W1129 09:02:17.425692  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:02:17.425704  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:02:17.425788  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:02:17.456077  460401 cri.go:89] found id: "c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:17.456105  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:17.456113  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:17.456136  460401 cri.go:89] found id: ""
	I1129 09:02:17.456148  460401 logs.go:282] 3 containers: [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:02:17.456213  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.461610  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.466727  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.471762  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:02:17.471849  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:02:17.501750  460401 cri.go:89] found id: ""
	I1129 09:02:17.501782  460401 logs.go:282] 0 containers: []
	W1129 09:02:17.501793  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:02:17.501801  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:02:17.501868  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:02:17.531903  460401 cri.go:89] found id: ""
	I1129 09:02:17.531932  460401 logs.go:282] 0 containers: []
	W1129 09:02:17.531942  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:02:17.531956  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:02:17.531972  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:02:17.630517  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:17.630566  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:17.667169  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:02:17.667205  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:17.707311  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:02:17.707360  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:17.746580  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:17.746621  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:17.799162  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:02:17.799207  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:17.839313  460401 logs.go:123] Gathering logs for kube-controller-manager [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6] ...
	I1129 09:02:17.839355  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:17.872700  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:02:17.872742  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:17.904806  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:02:17.904838  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:02:17.920866  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:02:17.920904  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:02:17.983002  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:02:17.983027  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:02:17.983040  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:18.019203  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:02:18.019241  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:02:18.070893  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:02:18.070936  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1129 09:02:19.830479  494126 node_ready.go:57] node "no-preload-924441" has "Ready":"False" status (will retry)
	I1129 09:02:20.833313  494126 node_ready.go:49] node "no-preload-924441" is "Ready"
	I1129 09:02:20.833355  494126 node_ready.go:38] duration metric: took 14.505431475s for node "no-preload-924441" to be "Ready" ...
	I1129 09:02:20.833377  494126 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:02:20.833445  494126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:02:20.850134  494126 api_server.go:72] duration metric: took 14.795523765s to wait for apiserver process to appear ...
	I1129 09:02:20.850165  494126 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:02:20.850190  494126 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1129 09:02:20.856514  494126 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1129 09:02:20.857900  494126 api_server.go:141] control plane version: v1.34.1
	I1129 09:02:20.857933  494126 api_server.go:131] duration metric: took 7.759312ms to wait for apiserver health ...
	I1129 09:02:20.857945  494126 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:02:20.861811  494126 system_pods.go:59] 8 kube-system pods found
	I1129 09:02:20.861851  494126 system_pods.go:61] "coredns-66bc5c9577-nsh8w" [bf2a8ab9-aaca-4ee6-a390-a02099f693d9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:20.861863  494126 system_pods.go:61] "etcd-no-preload-924441" [e3cda1b0-1ca8-4ded-a506-f728fc050781] Running
	I1129 09:02:20.861871  494126 system_pods.go:61] "kindnet-nscfk" [052c2152-0369-4121-b2fe-25b79a00145a] Running
	I1129 09:02:20.861877  494126 system_pods.go:61] "kube-apiserver-no-preload-924441" [08168b39-5d95-4d6b-ac99-3c6ee50a2530] Running
	I1129 09:02:20.861892  494126 system_pods.go:61] "kube-controller-manager-no-preload-924441" [9e84b562-ff11-40c1-a7ab-3682dbbae4be] Running
	I1129 09:02:20.861897  494126 system_pods.go:61] "kube-proxy-96fcg" [c9fd8592-2ec4-4da3-a800-b136c118d379] Running
	I1129 09:02:20.861902  494126 system_pods.go:61] "kube-scheduler-no-preload-924441" [91fa5a87-81d7-4b1c-8334-9c5c4fcf8997] Running
	I1129 09:02:20.861912  494126 system_pods.go:61] "storage-provisioner" [88b64cf8-3233-47bb-be31-6f367a8a1433] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:20.861920  494126 system_pods.go:74] duration metric: took 3.967151ms to wait for pod list to return data ...
	I1129 09:02:20.861931  494126 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:02:20.864542  494126 default_sa.go:45] found service account: "default"
	I1129 09:02:20.864569  494126 default_sa.go:55] duration metric: took 2.631761ms for default service account to be created ...
	I1129 09:02:20.864581  494126 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:02:20.867876  494126 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:20.867913  494126 system_pods.go:89] "coredns-66bc5c9577-nsh8w" [bf2a8ab9-aaca-4ee6-a390-a02099f693d9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:20.867924  494126 system_pods.go:89] "etcd-no-preload-924441" [e3cda1b0-1ca8-4ded-a506-f728fc050781] Running
	I1129 09:02:20.867932  494126 system_pods.go:89] "kindnet-nscfk" [052c2152-0369-4121-b2fe-25b79a00145a] Running
	I1129 09:02:20.867938  494126 system_pods.go:89] "kube-apiserver-no-preload-924441" [08168b39-5d95-4d6b-ac99-3c6ee50a2530] Running
	I1129 09:02:20.867999  494126 system_pods.go:89] "kube-controller-manager-no-preload-924441" [9e84b562-ff11-40c1-a7ab-3682dbbae4be] Running
	I1129 09:02:20.868005  494126 system_pods.go:89] "kube-proxy-96fcg" [c9fd8592-2ec4-4da3-a800-b136c118d379] Running
	I1129 09:02:20.868011  494126 system_pods.go:89] "kube-scheduler-no-preload-924441" [91fa5a87-81d7-4b1c-8334-9c5c4fcf8997] Running
	I1129 09:02:20.868027  494126 system_pods.go:89] "storage-provisioner" [88b64cf8-3233-47bb-be31-6f367a8a1433] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:20.868077  494126 retry.go:31] will retry after 292.54579ms: missing components: kube-dns
	I1129 09:02:21.165357  494126 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:21.165399  494126 system_pods.go:89] "coredns-66bc5c9577-nsh8w" [bf2a8ab9-aaca-4ee6-a390-a02099f693d9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:21.165408  494126 system_pods.go:89] "etcd-no-preload-924441" [e3cda1b0-1ca8-4ded-a506-f728fc050781] Running
	I1129 09:02:21.165416  494126 system_pods.go:89] "kindnet-nscfk" [052c2152-0369-4121-b2fe-25b79a00145a] Running
	I1129 09:02:21.165422  494126 system_pods.go:89] "kube-apiserver-no-preload-924441" [08168b39-5d95-4d6b-ac99-3c6ee50a2530] Running
	I1129 09:02:21.165428  494126 system_pods.go:89] "kube-controller-manager-no-preload-924441" [9e84b562-ff11-40c1-a7ab-3682dbbae4be] Running
	I1129 09:02:21.165434  494126 system_pods.go:89] "kube-proxy-96fcg" [c9fd8592-2ec4-4da3-a800-b136c118d379] Running
	I1129 09:02:21.165439  494126 system_pods.go:89] "kube-scheduler-no-preload-924441" [91fa5a87-81d7-4b1c-8334-9c5c4fcf8997] Running
	I1129 09:02:21.165449  494126 system_pods.go:89] "storage-provisioner" [88b64cf8-3233-47bb-be31-6f367a8a1433] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:21.165470  494126 retry.go:31] will retry after 336.406198ms: missing components: kube-dns
	I1129 09:02:21.505471  494126 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:21.505510  494126 system_pods.go:89] "coredns-66bc5c9577-nsh8w" [bf2a8ab9-aaca-4ee6-a390-a02099f693d9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:21.505516  494126 system_pods.go:89] "etcd-no-preload-924441" [e3cda1b0-1ca8-4ded-a506-f728fc050781] Running
	I1129 09:02:21.505524  494126 system_pods.go:89] "kindnet-nscfk" [052c2152-0369-4121-b2fe-25b79a00145a] Running
	I1129 09:02:21.505528  494126 system_pods.go:89] "kube-apiserver-no-preload-924441" [08168b39-5d95-4d6b-ac99-3c6ee50a2530] Running
	I1129 09:02:21.505531  494126 system_pods.go:89] "kube-controller-manager-no-preload-924441" [9e84b562-ff11-40c1-a7ab-3682dbbae4be] Running
	I1129 09:02:21.505534  494126 system_pods.go:89] "kube-proxy-96fcg" [c9fd8592-2ec4-4da3-a800-b136c118d379] Running
	I1129 09:02:21.505538  494126 system_pods.go:89] "kube-scheduler-no-preload-924441" [91fa5a87-81d7-4b1c-8334-9c5c4fcf8997] Running
	I1129 09:02:21.505542  494126 system_pods.go:89] "storage-provisioner" [88b64cf8-3233-47bb-be31-6f367a8a1433] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:21.505560  494126 retry.go:31] will retry after 447.535618ms: missing components: kube-dns
	I1129 09:02:21.957409  494126 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:21.957439  494126 system_pods.go:89] "coredns-66bc5c9577-nsh8w" [bf2a8ab9-aaca-4ee6-a390-a02099f693d9] Running
	I1129 09:02:21.957444  494126 system_pods.go:89] "etcd-no-preload-924441" [e3cda1b0-1ca8-4ded-a506-f728fc050781] Running
	I1129 09:02:21.957448  494126 system_pods.go:89] "kindnet-nscfk" [052c2152-0369-4121-b2fe-25b79a00145a] Running
	I1129 09:02:21.957451  494126 system_pods.go:89] "kube-apiserver-no-preload-924441" [08168b39-5d95-4d6b-ac99-3c6ee50a2530] Running
	I1129 09:02:21.957456  494126 system_pods.go:89] "kube-controller-manager-no-preload-924441" [9e84b562-ff11-40c1-a7ab-3682dbbae4be] Running
	I1129 09:02:21.957459  494126 system_pods.go:89] "kube-proxy-96fcg" [c9fd8592-2ec4-4da3-a800-b136c118d379] Running
	I1129 09:02:21.957464  494126 system_pods.go:89] "kube-scheduler-no-preload-924441" [91fa5a87-81d7-4b1c-8334-9c5c4fcf8997] Running
	I1129 09:02:21.957467  494126 system_pods.go:89] "storage-provisioner" [88b64cf8-3233-47bb-be31-6f367a8a1433] Running
	I1129 09:02:21.957476  494126 system_pods.go:126] duration metric: took 1.092887723s to wait for k8s-apps to be running ...
	I1129 09:02:21.957498  494126 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:02:21.957549  494126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:02:21.971582  494126 system_svc.go:56] duration metric: took 14.071974ms WaitForService to wait for kubelet
	I1129 09:02:21.971613  494126 kubeadm.go:587] duration metric: took 15.917009838s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:02:21.971632  494126 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:02:21.974426  494126 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1129 09:02:21.974453  494126 node_conditions.go:123] node cpu capacity is 8
	I1129 09:02:21.974471  494126 node_conditions.go:105] duration metric: took 2.83418ms to run NodePressure ...
	I1129 09:02:21.974485  494126 start.go:242] waiting for startup goroutines ...
	I1129 09:02:21.974492  494126 start.go:247] waiting for cluster config update ...
	I1129 09:02:21.974502  494126 start.go:256] writing updated cluster config ...
	I1129 09:02:21.974780  494126 ssh_runner.go:195] Run: rm -f paused
	I1129 09:02:21.978967  494126 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:02:21.982434  494126 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nsh8w" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:21.986370  494126 pod_ready.go:94] pod "coredns-66bc5c9577-nsh8w" is "Ready"
	I1129 09:02:21.986395  494126 pod_ready.go:86] duration metric: took 3.939701ms for pod "coredns-66bc5c9577-nsh8w" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:21.988365  494126 pod_ready.go:83] waiting for pod "etcd-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:21.991850  494126 pod_ready.go:94] pod "etcd-no-preload-924441" is "Ready"
	I1129 09:02:21.991874  494126 pod_ready.go:86] duration metric: took 3.486388ms for pod "etcd-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:21.993587  494126 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:21.997072  494126 pod_ready.go:94] pod "kube-apiserver-no-preload-924441" is "Ready"
	I1129 09:02:21.997092  494126 pod_ready.go:86] duration metric: took 3.484304ms for pod "kube-apiserver-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:21.998698  494126 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:22.382918  494126 pod_ready.go:94] pod "kube-controller-manager-no-preload-924441" is "Ready"
	I1129 09:02:22.382948  494126 pod_ready.go:86] duration metric: took 384.232783ms for pod "kube-controller-manager-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:22.583125  494126 pod_ready.go:83] waiting for pod "kube-proxy-96fcg" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:22.982608  494126 pod_ready.go:94] pod "kube-proxy-96fcg" is "Ready"
	I1129 09:02:22.982639  494126 pod_ready.go:86] duration metric: took 399.48383ms for pod "kube-proxy-96fcg" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:23.184031  494126 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:23.583027  494126 pod_ready.go:94] pod "kube-scheduler-no-preload-924441" is "Ready"
	I1129 09:02:23.583058  494126 pod_ready.go:86] duration metric: took 399.00134ms for pod "kube-scheduler-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:23.583071  494126 pod_ready.go:40] duration metric: took 1.604064431s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:02:23.632822  494126 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:02:23.634677  494126 out.go:179] * Done! kubectl is now configured to use "no-preload-924441" cluster and "default" namespace by default
	I1129 09:02:20.607959  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:02:20.608406  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:02:20.608469  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:02:20.608531  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:02:20.639116  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:20.639148  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:20.639155  460401 cri.go:89] found id: ""
	I1129 09:02:20.639168  460401 logs.go:282] 2 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:02:20.639240  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:20.644749  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:20.649347  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:02:20.649411  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:02:20.677383  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:20.677404  460401 cri.go:89] found id: ""
	I1129 09:02:20.677413  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:02:20.677466  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:20.682625  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:02:20.682708  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:02:20.711021  460401 cri.go:89] found id: ""
	I1129 09:02:20.711050  460401 logs.go:282] 0 containers: []
	W1129 09:02:20.711060  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:02:20.711070  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:02:20.711138  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:02:20.745598  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:20.745626  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:20.745632  460401 cri.go:89] found id: ""
	I1129 09:02:20.745643  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:02:20.745716  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:20.751838  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:20.757804  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:02:20.757881  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:02:20.793640  460401 cri.go:89] found id: ""
	I1129 09:02:20.793671  460401 logs.go:282] 0 containers: []
	W1129 09:02:20.793683  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:02:20.793691  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:02:20.793792  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:02:20.830071  460401 cri.go:89] found id: "c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:20.830099  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:20.830104  460401 cri.go:89] found id: ""
	I1129 09:02:20.830114  460401 logs.go:282] 2 containers: [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:02:20.830179  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:20.837576  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:20.843146  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:02:20.843225  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:02:20.883480  460401 cri.go:89] found id: ""
	I1129 09:02:20.883525  460401 logs.go:282] 0 containers: []
	W1129 09:02:20.883536  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:02:20.883543  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:02:20.883598  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:02:20.923499  460401 cri.go:89] found id: ""
	I1129 09:02:20.923532  460401 logs.go:282] 0 containers: []
	W1129 09:02:20.923543  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:02:20.923557  460401 logs.go:123] Gathering logs for kube-controller-manager [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6] ...
	I1129 09:02:20.923574  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:20.961675  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:02:20.961713  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:20.996489  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:02:20.996524  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:02:21.046535  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:02:21.046596  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:02:21.131239  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:02:21.131286  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:02:21.192537  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:02:21.192557  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:21.192573  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:21.227894  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:02:21.227932  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:21.262592  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:02:21.262632  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:02:21.298034  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:02:21.298076  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:02:21.313593  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:02:21.313626  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:21.355840  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:21.355878  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:21.409528  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:02:21.409570  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:23.946261  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:02:23.946794  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:02:23.946872  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:02:23.946940  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:02:23.978496  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:23.978521  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:23.978525  460401 cri.go:89] found id: ""
	I1129 09:02:23.978533  460401 logs.go:282] 2 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:02:23.978585  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:23.983820  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:23.988502  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:02:23.988563  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:02:24.017479  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:24.017505  460401 cri.go:89] found id: ""
	I1129 09:02:24.017516  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:02:24.017581  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:24.022978  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:02:24.023049  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:02:24.054017  460401 cri.go:89] found id: ""
	I1129 09:02:24.054042  460401 logs.go:282] 0 containers: []
	W1129 09:02:24.054049  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:02:24.054055  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:02:24.054104  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:02:24.083682  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:24.083704  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:24.083710  460401 cri.go:89] found id: ""
	I1129 09:02:24.083720  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:02:24.083797  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:24.089191  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:24.094144  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:02:24.094223  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:02:24.123931  460401 cri.go:89] found id: ""
	I1129 09:02:24.123956  460401 logs.go:282] 0 containers: []
	W1129 09:02:24.123964  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:02:24.123972  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:02:24.124032  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:02:24.158678  460401 cri.go:89] found id: "c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:24.158704  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:24.158710  460401 cri.go:89] found id: ""
	I1129 09:02:24.158721  460401 logs.go:282] 2 containers: [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:02:24.158824  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:24.164380  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:24.170117  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:02:24.170196  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:02:24.202016  460401 cri.go:89] found id: ""
	I1129 09:02:24.202057  460401 logs.go:282] 0 containers: []
	W1129 09:02:24.202066  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:02:24.202072  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:02:24.202123  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:02:24.235359  460401 cri.go:89] found id: ""
	I1129 09:02:24.235388  460401 logs.go:282] 0 containers: []
	W1129 09:02:24.235399  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:02:24.235412  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:02:24.235427  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:02:24.327121  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:24.327167  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:24.380608  460401 logs.go:123] Gathering logs for kube-controller-manager [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6] ...
	I1129 09:02:24.380651  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:24.411895  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:02:24.411923  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:24.450543  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:02:24.450575  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:02:24.500105  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:02:24.500146  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	64dcae39f0e63       56cc512116c8f       7 seconds ago       Running             busybox                   0                   c3b03930e2672       busybox                                          default
	84eb7f692c990       ead0a4a53df89       13 seconds ago      Running             coredns                   0                   46a4885d817e8       coredns-5dd5756b68-phw28                         kube-system
	c2b64aca34f8b       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   f0e9f57ece0e7       storage-provisioner                              kube-system
	c556471fd7ebd       409467f978b4a       24 seconds ago      Running             kindnet-cni               0                   c9cb87dbe2bae       kindnet-k4n9l                                    kube-system
	c3eb6059b5593       ea1030da44aa1       27 seconds ago      Running             kube-proxy                0                   d9056ddc2e968       kube-proxy-4rfb4                                 kube-system
	ec1e8ae808249       f6f496300a2ae       45 seconds ago      Running             kube-scheduler            0                   7caf413f5769e       kube-scheduler-old-k8s-version-295154            kube-system
	b3d9ef849b109       4be79c38a4bab       45 seconds ago      Running             kube-controller-manager   0                   f845d639a6e89       kube-controller-manager-old-k8s-version-295154   kube-system
	e534f6de34cb5       73deb9a3f7025       45 seconds ago      Running             etcd                      0                   83b4224fe982d       etcd-old-k8s-version-295154                      kube-system
	c912b0431f5b9       bb5e0dde9054c       45 seconds ago      Running             kube-apiserver            0                   c5ef1020ba416       kube-apiserver-old-k8s-version-295154            kube-system
	
	
	==> containerd <==
	Nov 29 09:02:13 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:13.171284629Z" level=info msg="CreateContainer within sandbox \"f0e9f57ece0e7298ea8ff52e824c152b0a198734fa271e11f9da85ab94980def\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"c2b64aca34f8b72337fd1dd9bda969ab607f739b3b5bd64a9962706bb51f1368\""
	Nov 29 09:02:13 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:13.171952045Z" level=info msg="StartContainer for \"c2b64aca34f8b72337fd1dd9bda969ab607f739b3b5bd64a9962706bb51f1368\""
	Nov 29 09:02:13 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:13.173213037Z" level=info msg="connecting to shim c2b64aca34f8b72337fd1dd9bda969ab607f739b3b5bd64a9962706bb51f1368" address="unix:///run/containerd/s/dc122ba824fb2ecb94628ad2391429e4d2b98c17ac396814c4a25b4d93b141fe" protocol=ttrpc version=3
	Nov 29 09:02:13 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:13.175196491Z" level=info msg="CreateContainer within sandbox \"46a4885d817e84fab45e9ad70e7c335ccc0f307e19f484641f3f563e19a3b305\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"84eb7f692c99059489020b59b47c169ecc9d4286a2bf7a532dae7f5d13e68795\""
	Nov 29 09:02:13 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:13.175823701Z" level=info msg="StartContainer for \"84eb7f692c99059489020b59b47c169ecc9d4286a2bf7a532dae7f5d13e68795\""
	Nov 29 09:02:13 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:13.176634429Z" level=info msg="connecting to shim 84eb7f692c99059489020b59b47c169ecc9d4286a2bf7a532dae7f5d13e68795" address="unix:///run/containerd/s/950489f09bce35a172bb4082bad530c176c650052c0ffe9dab18daf70ee3f021" protocol=ttrpc version=3
	Nov 29 09:02:13 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:13.230846483Z" level=info msg="StartContainer for \"c2b64aca34f8b72337fd1dd9bda969ab607f739b3b5bd64a9962706bb51f1368\" returns successfully"
	Nov 29 09:02:13 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:13.234243145Z" level=info msg="StartContainer for \"84eb7f692c99059489020b59b47c169ecc9d4286a2bf7a532dae7f5d13e68795\" returns successfully"
	Nov 29 09:02:16 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:16.439586027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:54baf2f4-8de5-4f66-92ac-f5315174d940,Namespace:default,Attempt:0,}"
	Nov 29 09:02:16 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:16.482219935Z" level=info msg="connecting to shim c3b03930e26728c610c785b965715fd3b553dfa8fa71b6e35bcc2370b534d413" address="unix:///run/containerd/s/705109ebb456d589bcc59459487d5f036c6a54c53bc3e7a7b9f9e1b41d8f56cc" namespace=k8s.io protocol=ttrpc version=3
	Nov 29 09:02:16 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:16.554186463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:54baf2f4-8de5-4f66-92ac-f5315174d940,Namespace:default,Attempt:0,} returns sandbox id \"c3b03930e26728c610c785b965715fd3b553dfa8fa71b6e35bcc2370b534d413\""
	Nov 29 09:02:16 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:16.556162494Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 29 09:02:19 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:19.188092236Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:02:19 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:19.188755127Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396643"
	Nov 29 09:02:19 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:19.190108938Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:02:19 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:19.192089044Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:02:19 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:19.192508223Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.636298875s"
	Nov 29 09:02:19 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:19.192553605Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 29 09:02:19 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:19.194479178Z" level=info msg="CreateContainer within sandbox \"c3b03930e26728c610c785b965715fd3b553dfa8fa71b6e35bcc2370b534d413\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 29 09:02:19 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:19.201487714Z" level=info msg="Container 64dcae39f0e638d4b6c6e188a3cb9da7d32231fa3ff9ad25ba54b2c00601f705: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:02:19 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:19.207643963Z" level=info msg="CreateContainer within sandbox \"c3b03930e26728c610c785b965715fd3b553dfa8fa71b6e35bcc2370b534d413\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"64dcae39f0e638d4b6c6e188a3cb9da7d32231fa3ff9ad25ba54b2c00601f705\""
	Nov 29 09:02:19 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:19.208357251Z" level=info msg="StartContainer for \"64dcae39f0e638d4b6c6e188a3cb9da7d32231fa3ff9ad25ba54b2c00601f705\""
	Nov 29 09:02:19 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:19.209198742Z" level=info msg="connecting to shim 64dcae39f0e638d4b6c6e188a3cb9da7d32231fa3ff9ad25ba54b2c00601f705" address="unix:///run/containerd/s/705109ebb456d589bcc59459487d5f036c6a54c53bc3e7a7b9f9e1b41d8f56cc" protocol=ttrpc version=3
	Nov 29 09:02:19 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:19.268677673Z" level=info msg="StartContainer for \"64dcae39f0e638d4b6c6e188a3cb9da7d32231fa3ff9ad25ba54b2c00601f705\" returns successfully"
	Nov 29 09:02:25 old-k8s-version-295154 containerd[663]: E1129 09:02:25.213853     663 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [84eb7f692c99059489020b59b47c169ecc9d4286a2bf7a532dae7f5d13e68795] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46306 - 2219 "HINFO IN 2134159150006616805.6033665223682648056. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036424572s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-295154
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-295154
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=old-k8s-version-295154
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_01_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:01:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-295154
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:02:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:02:16 +0000   Sat, 29 Nov 2025 09:01:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:02:16 +0000   Sat, 29 Nov 2025 09:01:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:02:16 +0000   Sat, 29 Nov 2025 09:01:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:02:16 +0000   Sat, 29 Nov 2025 09:02:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-295154
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                22b437c1-66e6-4b41-85ab-28edf17772d8
	  Boot ID:                    b81dce2f-73d5-4349-b473-aa1210058cb8
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-phw28                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-old-k8s-version-295154                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         40s
	  kube-system                 kindnet-k4n9l                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-295154             250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-295154    200m (2%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-4rfb4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-295154             100m (1%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 41s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  40s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  40s   kubelet          Node old-k8s-version-295154 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s   kubelet          Node old-k8s-version-295154 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s   kubelet          Node old-k8s-version-295154 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node old-k8s-version-295154 event: Registered Node old-k8s-version-295154 in Controller
	  Normal  NodeReady                14s   kubelet          Node old-k8s-version-295154 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov29 07:17] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001881] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084003] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.378167] i8042: Warning: Keylock active
	[  +0.012106] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.460417] block sda: the capability attribute has been deprecated.
	[  +0.079627] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.021012] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.285522] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [e534f6de34cb59a48842df5c90bc3db11dfa608b2f5ab4df9fd455d5a0bc5f86] <==
	{"level":"info","ts":"2025-11-29T09:01:40.832264Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"ea7e25599daad906","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-11-29T09:01:40.833809Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-29T09:01:40.834831Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-29T09:01:40.835134Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-29T09:01:40.835187Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-29T09:01:40.835365Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-29T09:01:40.835454Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-29T09:01:41.123873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-29T09:01:41.123935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-29T09:01:41.123975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-11-29T09:01:41.123993Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-11-29T09:01:41.124004Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-29T09:01:41.124048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-11-29T09:01:41.124063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-29T09:01:41.125302Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-29T09:01:41.125326Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-29T09:01:41.125372Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T09:01:41.125276Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-295154 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-29T09:01:41.126456Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T09:01:41.126541Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T09:01:41.126567Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T09:01:41.126779Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-29T09:01:41.127083Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-29T09:01:41.127112Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-29T09:01:41.126728Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:02:26 up  1:44,  0 user,  load average: 2.70, 2.84, 12.45
	Linux old-k8s-version-295154 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c556471fd7ebd161ba2d7b8d6bae271ee70e193598e07a1f28e7e4edb21ff0ac] <==
	I1129 09:02:02.479657       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:02:02.479993       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1129 09:02:02.480115       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:02:02.480129       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:02:02.480148       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:02:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:02:02.682312       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:02:02.682392       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:02:02.682406       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:02:02.682562       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:02:03.155518       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:02:03.155556       1 metrics.go:72] Registering metrics
	I1129 09:02:03.155642       1 controller.go:711] "Syncing nftables rules"
	I1129 09:02:12.691133       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:02:12.691191       1 main.go:301] handling current node
	I1129 09:02:22.684230       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:02:22.684264       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c912b0431f5b96b6ae8d3df9e39af5a731f5b6f4a3128fbae403427258cd4010] <==
	I1129 09:01:42.628432       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1129 09:01:42.628473       1 aggregator.go:166] initial CRD sync complete...
	I1129 09:01:42.628487       1 autoregister_controller.go:141] Starting autoregister controller
	I1129 09:01:42.628498       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1129 09:01:42.628507       1 cache.go:39] Caches are synced for autoregister controller
	I1129 09:01:42.630276       1 controller.go:624] quota admission added evaluator for: namespaces
	I1129 09:01:42.631842       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1129 09:01:42.632653       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1129 09:01:42.633160       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1129 09:01:42.675946       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:01:43.534299       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1129 09:01:43.538893       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1129 09:01:43.538914       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:01:44.048669       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:01:44.089332       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:01:44.139778       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1129 09:01:44.147964       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1129 09:01:44.149152       1 controller.go:624] quota admission added evaluator for: endpoints
	I1129 09:01:44.153475       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:01:44.583851       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1129 09:01:45.899683       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1129 09:01:45.911834       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1129 09:01:45.923913       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1129 09:01:58.190396       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1129 09:01:58.345309       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [b3d9ef849b10991879886d480043efb13728841f71afc04d4c57f7bef3ceffc8] <==
	I1129 09:01:57.601489       1 shared_informer.go:318] Caches are synced for HPA
	I1129 09:01:57.641964       1 shared_informer.go:318] Caches are synced for resource quota
	I1129 09:01:57.693466       1 shared_informer.go:318] Caches are synced for resource quota
	I1129 09:01:58.013319       1 shared_informer.go:318] Caches are synced for garbage collector
	I1129 09:01:58.081463       1 shared_informer.go:318] Caches are synced for garbage collector
	I1129 09:01:58.081502       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1129 09:01:58.201293       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-k4n9l"
	I1129 09:01:58.203642       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4rfb4"
	I1129 09:01:58.351467       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1129 09:01:58.446469       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-rjd8l"
	I1129 09:01:58.457821       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-phw28"
	I1129 09:01:58.472248       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="121.660505ms"
	I1129 09:01:58.490138       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.818584ms"
	I1129 09:01:58.490294       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="98.203µs"
	I1129 09:01:58.749707       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1129 09:01:58.764048       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-rjd8l"
	I1129 09:01:58.771830       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="24.493664ms"
	I1129 09:01:58.778438       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.545401ms"
	I1129 09:01:58.778711       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.414µs"
	I1129 09:02:12.741856       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="137.043µs"
	I1129 09:02:12.755154       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="122.723µs"
	I1129 09:02:14.089302       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="163.286µs"
	I1129 09:02:14.110178       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.287126ms"
	I1129 09:02:14.110300       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="75.729µs"
	I1129 09:02:17.447692       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [c3eb6059b5593e42d8e9ac6b43ac8b87e944eac5747f993c6bbca2acc16f180b] <==
	I1129 09:01:58.837203       1 server_others.go:69] "Using iptables proxy"
	I1129 09:01:58.847060       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1129 09:01:58.872286       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:01:58.874956       1 server_others.go:152] "Using iptables Proxier"
	I1129 09:01:58.875022       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1129 09:01:58.875038       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1129 09:01:58.875085       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1129 09:01:58.875423       1 server.go:846] "Version info" version="v1.28.0"
	I1129 09:01:58.875446       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:01:58.877361       1 config.go:188] "Starting service config controller"
	I1129 09:01:58.877426       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1129 09:01:58.878055       1 config.go:97] "Starting endpoint slice config controller"
	I1129 09:01:58.878080       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1129 09:01:58.878567       1 config.go:315] "Starting node config controller"
	I1129 09:01:58.878812       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1129 09:01:58.977719       1 shared_informer.go:318] Caches are synced for service config
	I1129 09:01:58.978897       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1129 09:01:58.979002       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [ec1e8ae808249468b5a57a4c1aa02a0700a8af9e46e3b394b96fda393ef3531b] <==
	E1129 09:01:42.591266       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1129 09:01:42.591281       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1129 09:01:43.438322       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1129 09:01:43.438354       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1129 09:01:43.459244       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1129 09:01:43.459274       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1129 09:01:43.466076       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1129 09:01:43.466111       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1129 09:01:43.467104       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1129 09:01:43.467131       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1129 09:01:43.496506       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1129 09:01:43.496554       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1129 09:01:43.745308       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1129 09:01:43.745358       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1129 09:01:43.782232       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1129 09:01:43.782279       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1129 09:01:43.784711       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1129 09:01:43.784785       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1129 09:01:43.822287       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1129 09:01:43.822413       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1129 09:01:43.831935       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1129 09:01:43.831979       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1129 09:01:44.009190       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1129 09:01:44.009227       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1129 09:01:46.586725       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 29 09:01:57 old-k8s-version-295154 kubelet[1505]: I1129 09:01:57.557701    1505 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 29 09:01:58 old-k8s-version-295154 kubelet[1505]: I1129 09:01:58.211770    1505 topology_manager.go:215] "Topology Admit Handler" podUID="74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8" podNamespace="kube-system" podName="kindnet-k4n9l"
	Nov 29 09:01:58 old-k8s-version-295154 kubelet[1505]: I1129 09:01:58.211977    1505 topology_manager.go:215] "Topology Admit Handler" podUID="05ef67c3-0d6e-453d-a0e5-81c649c3e033" podNamespace="kube-system" podName="kube-proxy-4rfb4"
	Nov 29 09:01:58 old-k8s-version-295154 kubelet[1505]: I1129 09:01:58.245664    1505 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvjhl\" (UniqueName: \"kubernetes.io/projected/74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8-kube-api-access-kvjhl\") pod \"kindnet-k4n9l\" (UID: \"74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8\") " pod="kube-system/kindnet-k4n9l"
	Nov 29 09:01:58 old-k8s-version-295154 kubelet[1505]: I1129 09:01:58.245757    1505 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8-cni-cfg\") pod \"kindnet-k4n9l\" (UID: \"74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8\") " pod="kube-system/kindnet-k4n9l"
	Nov 29 09:01:58 old-k8s-version-295154 kubelet[1505]: I1129 09:01:58.245804    1505 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8-lib-modules\") pod \"kindnet-k4n9l\" (UID: \"74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8\") " pod="kube-system/kindnet-k4n9l"
	Nov 29 09:01:58 old-k8s-version-295154 kubelet[1505]: I1129 09:01:58.245867    1505 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05ef67c3-0d6e-453d-a0e5-81c649c3e033-xtables-lock\") pod \"kube-proxy-4rfb4\" (UID: \"05ef67c3-0d6e-453d-a0e5-81c649c3e033\") " pod="kube-system/kube-proxy-4rfb4"
	Nov 29 09:01:58 old-k8s-version-295154 kubelet[1505]: I1129 09:01:58.245918    1505 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05ef67c3-0d6e-453d-a0e5-81c649c3e033-lib-modules\") pod \"kube-proxy-4rfb4\" (UID: \"05ef67c3-0d6e-453d-a0e5-81c649c3e033\") " pod="kube-system/kube-proxy-4rfb4"
	Nov 29 09:01:58 old-k8s-version-295154 kubelet[1505]: I1129 09:01:58.245964    1505 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/05ef67c3-0d6e-453d-a0e5-81c649c3e033-kube-proxy\") pod \"kube-proxy-4rfb4\" (UID: \"05ef67c3-0d6e-453d-a0e5-81c649c3e033\") " pod="kube-system/kube-proxy-4rfb4"
	Nov 29 09:01:58 old-k8s-version-295154 kubelet[1505]: I1129 09:01:58.245999    1505 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8-xtables-lock\") pod \"kindnet-k4n9l\" (UID: \"74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8\") " pod="kube-system/kindnet-k4n9l"
	Nov 29 09:01:58 old-k8s-version-295154 kubelet[1505]: I1129 09:01:58.246031    1505 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6tpd\" (UniqueName: \"kubernetes.io/projected/05ef67c3-0d6e-453d-a0e5-81c649c3e033-kube-api-access-l6tpd\") pod \"kube-proxy-4rfb4\" (UID: \"05ef67c3-0d6e-453d-a0e5-81c649c3e033\") " pod="kube-system/kube-proxy-4rfb4"
	Nov 29 09:01:59 old-k8s-version-295154 kubelet[1505]: I1129 09:01:59.051481    1505 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4rfb4" podStartSLOduration=1.051403893 podCreationTimestamp="2025-11-29 09:01:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:01:59.051034434 +0000 UTC m=+13.185091147" watchObservedRunningTime="2025-11-29 09:01:59.051403893 +0000 UTC m=+13.185460607"
	Nov 29 09:02:03 old-k8s-version-295154 kubelet[1505]: I1129 09:02:03.075069    1505 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-k4n9l" podStartSLOduration=1.8021440370000001 podCreationTimestamp="2025-11-29 09:01:58 +0000 UTC" firstStartedPulling="2025-11-29 09:01:58.884230342 +0000 UTC m=+13.018287046" lastFinishedPulling="2025-11-29 09:02:02.157002868 +0000 UTC m=+16.291059564" observedRunningTime="2025-11-29 09:02:03.074620988 +0000 UTC m=+17.208677701" watchObservedRunningTime="2025-11-29 09:02:03.074916555 +0000 UTC m=+17.208973271"
	Nov 29 09:02:12 old-k8s-version-295154 kubelet[1505]: I1129 09:02:12.718189    1505 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 29 09:02:12 old-k8s-version-295154 kubelet[1505]: I1129 09:02:12.741770    1505 topology_manager.go:215] "Topology Admit Handler" podUID="7fc2b8dd-43dd-43df-8887-9ffa6de36fb4" podNamespace="kube-system" podName="coredns-5dd5756b68-phw28"
	Nov 29 09:02:12 old-k8s-version-295154 kubelet[1505]: I1129 09:02:12.742156    1505 topology_manager.go:215] "Topology Admit Handler" podUID="359871fd-a77c-430a-87c1-b313992718e2" podNamespace="kube-system" podName="storage-provisioner"
	Nov 29 09:02:12 old-k8s-version-295154 kubelet[1505]: I1129 09:02:12.838446    1505 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sztkn\" (UniqueName: \"kubernetes.io/projected/7fc2b8dd-43dd-43df-8887-9ffa6de36fb4-kube-api-access-sztkn\") pod \"coredns-5dd5756b68-phw28\" (UID: \"7fc2b8dd-43dd-43df-8887-9ffa6de36fb4\") " pod="kube-system/coredns-5dd5756b68-phw28"
	Nov 29 09:02:12 old-k8s-version-295154 kubelet[1505]: I1129 09:02:12.838527    1505 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ghrm\" (UniqueName: \"kubernetes.io/projected/359871fd-a77c-430a-87c1-b313992718e2-kube-api-access-2ghrm\") pod \"storage-provisioner\" (UID: \"359871fd-a77c-430a-87c1-b313992718e2\") " pod="kube-system/storage-provisioner"
	Nov 29 09:02:12 old-k8s-version-295154 kubelet[1505]: I1129 09:02:12.838708    1505 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fc2b8dd-43dd-43df-8887-9ffa6de36fb4-config-volume\") pod \"coredns-5dd5756b68-phw28\" (UID: \"7fc2b8dd-43dd-43df-8887-9ffa6de36fb4\") " pod="kube-system/coredns-5dd5756b68-phw28"
	Nov 29 09:02:12 old-k8s-version-295154 kubelet[1505]: I1129 09:02:12.838811    1505 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/359871fd-a77c-430a-87c1-b313992718e2-tmp\") pod \"storage-provisioner\" (UID: \"359871fd-a77c-430a-87c1-b313992718e2\") " pod="kube-system/storage-provisioner"
	Nov 29 09:02:14 old-k8s-version-295154 kubelet[1505]: I1129 09:02:14.089000    1505 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-phw28" podStartSLOduration=16.088943107 podCreationTimestamp="2025-11-29 09:01:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:02:14.088869179 +0000 UTC m=+28.222925894" watchObservedRunningTime="2025-11-29 09:02:14.088943107 +0000 UTC m=+28.222999821"
	Nov 29 09:02:14 old-k8s-version-295154 kubelet[1505]: I1129 09:02:14.111723    1505 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.111665904 podCreationTimestamp="2025-11-29 09:01:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:02:14.111613929 +0000 UTC m=+28.245670654" watchObservedRunningTime="2025-11-29 09:02:14.111665904 +0000 UTC m=+28.245722610"
	Nov 29 09:02:16 old-k8s-version-295154 kubelet[1505]: I1129 09:02:16.130277    1505 topology_manager.go:215] "Topology Admit Handler" podUID="54baf2f4-8de5-4f66-92ac-f5315174d940" podNamespace="default" podName="busybox"
	Nov 29 09:02:16 old-k8s-version-295154 kubelet[1505]: I1129 09:02:16.160532    1505 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj46k\" (UniqueName: \"kubernetes.io/projected/54baf2f4-8de5-4f66-92ac-f5315174d940-kube-api-access-wj46k\") pod \"busybox\" (UID: \"54baf2f4-8de5-4f66-92ac-f5315174d940\") " pod="default/busybox"
	Nov 29 09:02:20 old-k8s-version-295154 kubelet[1505]: I1129 09:02:20.102644    1505 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.465512975 podCreationTimestamp="2025-11-29 09:02:16 +0000 UTC" firstStartedPulling="2025-11-29 09:02:16.555803596 +0000 UTC m=+30.689860305" lastFinishedPulling="2025-11-29 09:02:19.192874383 +0000 UTC m=+33.326931083" observedRunningTime="2025-11-29 09:02:20.102453338 +0000 UTC m=+34.236510058" watchObservedRunningTime="2025-11-29 09:02:20.102583753 +0000 UTC m=+34.236640469"
	
	
	==> storage-provisioner [c2b64aca34f8b72337fd1dd9bda969ab607f739b3b5bd64a9962706bb51f1368] <==
	I1129 09:02:13.242146       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 09:02:13.250320       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 09:02:13.250375       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1129 09:02:13.260646       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:02:13.260835       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3d38b917-49d9-4ce8-b6d4-33e78e4354a6", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-295154_6170b45d-8612-41e5-bb3d-e5fe156c196d became leader
	I1129 09:02:13.260885       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-295154_6170b45d-8612-41e5-bb3d-e5fe156c196d!
	I1129 09:02:13.362157       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-295154_6170b45d-8612-41e5-bb3d-e5fe156c196d!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-295154 -n old-k8s-version-295154
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-295154 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-295154
helpers_test.go:243: (dbg) docker inspect old-k8s-version-295154:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1d2dc93defe08823e969abc1083166e5b987c49003d867c47f6dab538c73042e",
	        "Created": "2025-11-29T09:01:32.670265754Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 494787,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:01:32.709136408Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/1d2dc93defe08823e969abc1083166e5b987c49003d867c47f6dab538c73042e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1d2dc93defe08823e969abc1083166e5b987c49003d867c47f6dab538c73042e/hostname",
	        "HostsPath": "/var/lib/docker/containers/1d2dc93defe08823e969abc1083166e5b987c49003d867c47f6dab538c73042e/hosts",
	        "LogPath": "/var/lib/docker/containers/1d2dc93defe08823e969abc1083166e5b987c49003d867c47f6dab538c73042e/1d2dc93defe08823e969abc1083166e5b987c49003d867c47f6dab538c73042e-json.log",
	        "Name": "/old-k8s-version-295154",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-295154:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-295154",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1d2dc93defe08823e969abc1083166e5b987c49003d867c47f6dab538c73042e",
	                "LowerDir": "/var/lib/docker/overlay2/10e010eea53c4090a92173793351457113c92b95e4addfb0007c310be02782d4-init/diff:/var/lib/docker/overlay2/eb180691bce18b8d981b2d61ed0962851c615364ed77c18ff66d559424569005/diff",
	                "MergedDir": "/var/lib/docker/overlay2/10e010eea53c4090a92173793351457113c92b95e4addfb0007c310be02782d4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/10e010eea53c4090a92173793351457113c92b95e4addfb0007c310be02782d4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/10e010eea53c4090a92173793351457113c92b95e4addfb0007c310be02782d4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-295154",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-295154/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-295154",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-295154",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-295154",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d61dde634f57a1405987eb1bcb1468d94550e880fe30f55b1f686d12c8c280ee",
	            "SandboxKey": "/var/run/docker/netns/d61dde634f57",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-295154": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "aea341d97cf5d4f6668e24ade3efa38cebbca9060f995994226a6ded161b076c",
	                    "EndpointID": "7f306b5e076751e147ce07bdf687dd5284be41e6bffcdf4542e80d7a90deb9e2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "e6:d5:92:ca:f6:04",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-295154",
	                        "1d2dc93defe0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-295154 -n old-k8s-version-295154
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-295154 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-295154 logs -n 25: (1.135224056s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-770004 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ ssh     │ -p cilium-770004 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ ssh     │ -p cilium-770004 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ ssh     │ -p cilium-770004 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ ssh     │ -p cilium-770004 sudo containerd config dump                                                                                                                                                                                                        │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ ssh     │ -p cilium-770004 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ ssh     │ -p cilium-770004 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ ssh     │ -p cilium-770004 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ ssh     │ -p cilium-770004 sudo crio config                                                                                                                                                                                                                   │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ delete  │ -p cilium-770004                                                                                                                                                                                                                                    │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │ 29 Nov 25 09:00 UTC │
	│ start   │ -p force-systemd-env-693869 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-693869 │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │ 29 Nov 25 09:01 UTC │
	│ start   │ -p pause-563162 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                                                                              │ pause-563162             │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │ 29 Nov 25 09:01 UTC │
	│ ssh     │ force-systemd-env-693869 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-693869 │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ delete  │ -p force-systemd-env-693869                                                                                                                                                                                                                         │ force-systemd-env-693869 │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ start   │ -p cert-options-536258 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-536258      │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ pause   │ -p pause-563162 --alsologtostderr -v=5                                                                                                                                                                                                              │ pause-563162             │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ unpause │ -p pause-563162 --alsologtostderr -v=5                                                                                                                                                                                                              │ pause-563162             │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ pause   │ -p pause-563162 --alsologtostderr -v=5                                                                                                                                                                                                              │ pause-563162             │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ delete  │ -p pause-563162 --alsologtostderr -v=5                                                                                                                                                                                                              │ pause-563162             │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ ssh     │ cert-options-536258 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-536258      │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ ssh     │ -p cert-options-536258 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-536258      │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ delete  │ -p cert-options-536258                                                                                                                                                                                                                              │ cert-options-536258      │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ delete  │ -p pause-563162                                                                                                                                                                                                                                     │ pause-563162             │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ start   │ -p old-k8s-version-295154 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-295154   │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:02 UTC │
	│ start   │ -p no-preload-924441 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-924441        │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:02 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:01:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:01:26.371812  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:01:26.372231  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:01:26.372304  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:01:26.372374  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:01:26.406988  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:26.407016  460401 cri.go:89] found id: "40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac"
	I1129 09:01:26.407022  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:26.407027  460401 cri.go:89] found id: ""
	I1129 09:01:26.407038  460401 logs.go:282] 3 containers: [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:01:26.407111  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.413707  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.419492  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.424920  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:01:26.424999  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:01:26.456369  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:26.456395  460401 cri.go:89] found id: ""
	I1129 09:01:26.456406  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:01:26.456466  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.462064  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:01:26.462133  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:01:26.492837  460401 cri.go:89] found id: ""
	I1129 09:01:26.492868  460401 logs.go:282] 0 containers: []
	W1129 09:01:26.492879  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:01:26.492887  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:01:26.492955  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:01:26.521715  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:26.521747  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:26.521754  460401 cri.go:89] found id: ""
	I1129 09:01:26.521763  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:01:26.521821  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.526872  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.531295  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:01:26.531353  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:01:26.558218  460401 cri.go:89] found id: ""
	I1129 09:01:26.558248  460401 logs.go:282] 0 containers: []
	W1129 09:01:26.558257  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:01:26.558264  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:01:26.558313  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:01:26.587221  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:26.587246  460401 cri.go:89] found id: "f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:26.587253  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:26.587258  460401 cri.go:89] found id: ""
	I1129 09:01:26.587268  460401 logs.go:282] 3 containers: [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:01:26.587328  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.591954  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.596055  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.600163  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:01:26.600219  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:01:26.628586  460401 cri.go:89] found id: ""
	I1129 09:01:26.628613  460401 logs.go:282] 0 containers: []
	W1129 09:01:26.628624  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:01:26.628633  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:01:26.628690  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:01:26.657553  460401 cri.go:89] found id: ""
	I1129 09:01:26.657581  460401 logs.go:282] 0 containers: []
	W1129 09:01:26.657591  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:01:26.657603  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:01:26.657622  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:01:26.721559  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:01:26.721584  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:01:26.721601  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:26.756136  460401 logs.go:123] Gathering logs for kube-controller-manager [f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00] ...
	I1129 09:01:26.756165  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:26.787789  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:01:26.787827  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:01:26.838908  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:01:26.838943  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:01:26.875689  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:01:26.875723  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:01:26.946907  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:01:26.946941  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:26.982883  460401 logs.go:123] Gathering logs for kube-apiserver [40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac] ...
	I1129 09:01:26.982919  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac"
	W1129 09:01:27.012923  460401 logs.go:130] failed kube-apiserver [40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac": Process exited with status 1
	stdout:
	
	stderr:
	E1129 09:01:27.010611    2688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac\": not found" containerID="40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac"
	time="2025-11-29T09:01:27Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac\": not found"
	 output: 
	** stderr ** 
	E1129 09:01:27.010611    2688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac\": not found" containerID="40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac"
	time="2025-11-29T09:01:27Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac\": not found"
	
	** /stderr **
	I1129 09:01:27.012941  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:01:27.012953  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:27.051493  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:01:27.051526  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:27.089722  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:01:27.089755  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:27.138471  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:01:27.138504  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:27.172932  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:01:27.172962  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:27.207844  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:01:27.207878  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:01:29.500031  494126 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:01:29.500142  494126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:01:29.500153  494126 out.go:374] Setting ErrFile to fd 2...
	I1129 09:01:29.500159  494126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:01:29.500372  494126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-255825/.minikube/bin
	I1129 09:01:29.500882  494126 out.go:368] Setting JSON to false
	I1129 09:01:29.501996  494126 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6233,"bootTime":1764400656,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:01:29.502070  494126 start.go:143] virtualization: kvm guest
	I1129 09:01:29.506976  494126 out.go:179] * [no-preload-924441] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:01:29.508162  494126 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:01:29.508182  494126 notify.go:221] Checking for updates...
	I1129 09:01:29.510318  494126 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:01:29.511334  494126 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-255825/kubeconfig
	I1129 09:01:29.516252  494126 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-255825/.minikube
	I1129 09:01:29.517321  494126 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:01:29.518374  494126 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:01:29.519877  494126 config.go:182] Loaded profile config "cert-expiration-368536": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:01:29.519989  494126 config.go:182] Loaded profile config "kubernetes-upgrade-806701": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:01:29.520095  494126 config.go:182] Loaded profile config "old-k8s-version-295154": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1129 09:01:29.520225  494126 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:01:29.546023  494126 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 09:01:29.546141  494126 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:01:29.607775  494126 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:81 SystemTime:2025-11-29 09:01:29.596891851 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:01:29.607908  494126 docker.go:319] overlay module found
	I1129 09:01:29.610288  494126 out.go:179] * Using the docker driver based on user configuration
	I1129 09:01:29.611200  494126 start.go:309] selected driver: docker
	I1129 09:01:29.611220  494126 start.go:927] validating driver "docker" against <nil>
	I1129 09:01:29.611231  494126 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:01:29.611850  494126 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:01:29.673266  494126 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:81 SystemTime:2025-11-29 09:01:29.662655452 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:01:29.673484  494126 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 09:01:29.673822  494126 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:01:29.675454  494126 out.go:179] * Using Docker driver with root privileges
	I1129 09:01:29.679127  494126 cni.go:84] Creating CNI manager for ""
	I1129 09:01:29.679243  494126 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:01:29.679264  494126 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 09:01:29.679351  494126 start.go:353] cluster config:
	{Name:no-preload-924441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-924441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:01:29.680591  494126 out.go:179] * Starting "no-preload-924441" primary control-plane node in "no-preload-924441" cluster
	I1129 09:01:29.681517  494126 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1129 09:01:29.682533  494126 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:01:29.683845  494126 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:01:29.683975  494126 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/config.json ...
	I1129 09:01:29.683971  494126 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:01:29.684042  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/config.json: {Name:mk4df9140f26fdbfe5b2addb71b44607d26b26a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:29.684181  494126 cache.go:107] acquiring lock: {Name:mka90f7eac55a6e5d6d9651fc108f327509b562f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684233  494126 cache.go:107] acquiring lock: {Name:mk2c250a4202b546a18f0cc7664314439a4ec834 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684259  494126 cache.go:107] acquiring lock: {Name:mk976aaa4e01b0c9e83cc6925b8c3c72804bfa25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684288  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1129 09:01:29.684299  494126 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 144.373µs
	I1129 09:01:29.684315  494126 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1129 09:01:29.684321  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1129 09:01:29.684322  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1129 09:01:29.684332  494126 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 80.37µs
	I1129 09:01:29.684333  494126 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 119.913µs
	I1129 09:01:29.684341  494126 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1129 09:01:29.684344  494126 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1129 09:01:29.684332  494126 cache.go:107] acquiring lock: {Name:mkff44f5b6b961ddaa9acc3e74cf0480b0d2f776 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684358  494126 cache.go:107] acquiring lock: {Name:mk6080f4393a19fb5c4d6f436dce1a2bb1688f86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684378  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1129 09:01:29.684387  494126 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 58.113µs
	I1129 09:01:29.684395  494126 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1129 09:01:29.684399  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1129 09:01:29.684282  494126 cache.go:107] acquiring lock: {Name:mkb8e7a67c98a0b8caa208116d415323f5ca7ccc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684410  494126 cache.go:107] acquiring lock: {Name:mk47ee24ca074cb6cc1a641d737215686b099dc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684472  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1129 09:01:29.684482  494126 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 217.393µs
	I1129 09:01:29.684492  494126 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1129 09:01:29.684416  494126 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 61.464µs
	I1129 09:01:29.684504  494126 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1129 09:01:29.684517  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1129 09:01:29.684533  494126 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 171.692µs
	I1129 09:01:29.684552  494126 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1129 09:01:29.684643  494126 cache.go:107] acquiring lock: {Name:mk912246de843459c104f342794e23ecb1fc7a75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684790  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1129 09:01:29.684806  494126 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 226.111µs
	I1129 09:01:29.684824  494126 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1129 09:01:29.684840  494126 cache.go:87] Successfully saved all images to host disk.
	I1129 09:01:29.706829  494126 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:01:29.706854  494126 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:01:29.706878  494126 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:01:29.706918  494126 start.go:360] acquireMachinesLock for no-preload-924441: {Name:mkf9f3b6b30f178cf9b9d50a2dabce8e2c5d48f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.707056  494126 start.go:364] duration metric: took 99.455µs to acquireMachinesLock for "no-preload-924441"
	I1129 09:01:29.707090  494126 start.go:93] Provisioning new machine with config: &{Name:no-preload-924441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-924441 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1129 09:01:29.707206  494126 start.go:125] createHost starting for "" (driver="docker")
	I1129 09:01:28.461537  493486 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1129 09:01:28.461867  493486 start.go:159] libmachine.API.Create for "old-k8s-version-295154" (driver="docker")
	I1129 09:01:28.461917  493486 client.go:173] LocalClient.Create starting
	I1129 09:01:28.462009  493486 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem
	I1129 09:01:28.462065  493486 main.go:143] libmachine: Decoding PEM data...
	I1129 09:01:28.462089  493486 main.go:143] libmachine: Parsing certificate...
	I1129 09:01:28.462160  493486 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem
	I1129 09:01:28.462186  493486 main.go:143] libmachine: Decoding PEM data...
	I1129 09:01:28.462205  493486 main.go:143] libmachine: Parsing certificate...
	I1129 09:01:28.462679  493486 cli_runner.go:164] Run: docker network inspect old-k8s-version-295154 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 09:01:28.481658  493486 cli_runner.go:211] docker network inspect old-k8s-version-295154 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 09:01:28.481745  493486 network_create.go:284] running [docker network inspect old-k8s-version-295154] to gather additional debugging logs...
	I1129 09:01:28.481770  493486 cli_runner.go:164] Run: docker network inspect old-k8s-version-295154
	W1129 09:01:28.500619  493486 cli_runner.go:211] docker network inspect old-k8s-version-295154 returned with exit code 1
	I1129 09:01:28.500661  493486 network_create.go:287] error running [docker network inspect old-k8s-version-295154]: docker network inspect old-k8s-version-295154: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-295154 not found
	I1129 09:01:28.500677  493486 network_create.go:289] output of [docker network inspect old-k8s-version-295154]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-295154 not found
	
	** /stderr **
	I1129 09:01:28.500849  493486 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:01:28.518426  493486 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f69c672bf913 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:26:40:f4:ed:4f:ab} reservation:<nil>}
	I1129 09:01:28.519384  493486 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-96d20aff5877 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:01:e2:a3:b8:33} reservation:<nil>}
	I1129 09:01:28.520407  493486 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f7906c56f869 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:29:75:e3:e0:7f} reservation:<nil>}
	I1129 09:01:28.521974  493486 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f90700}
	I1129 09:01:28.522028  493486 network_create.go:124] attempt to create docker network old-k8s-version-295154 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1129 09:01:28.522109  493486 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-295154 old-k8s-version-295154
	I1129 09:01:28.575478  493486 network_create.go:108] docker network old-k8s-version-295154 192.168.76.0/24 created
	I1129 09:01:28.575522  493486 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-295154" container
	I1129 09:01:28.575603  493486 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 09:01:28.593666  493486 cli_runner.go:164] Run: docker volume create old-k8s-version-295154 --label name.minikube.sigs.k8s.io=old-k8s-version-295154 --label created_by.minikube.sigs.k8s.io=true
	I1129 09:01:28.612389  493486 oci.go:103] Successfully created a docker volume old-k8s-version-295154
	I1129 09:01:28.612501  493486 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-295154-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-295154 --entrypoint /usr/bin/test -v old-k8s-version-295154:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 09:01:29.238109  493486 oci.go:107] Successfully prepared a docker volume old-k8s-version-295154
	I1129 09:01:29.238162  493486 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1129 09:01:29.238176  493486 kic.go:194] Starting extracting preloaded images to volume ...
	I1129 09:01:29.238241  493486 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-255825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-295154:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1129 09:01:32.586626  493486 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-255825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-295154:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (3.348341473s)
	I1129 09:01:32.586660  493486 kic.go:203] duration metric: took 3.348481997s to extract preloaded images to volume ...
	W1129 09:01:32.586761  493486 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1129 09:01:32.586805  493486 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1129 09:01:32.586861  493486 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 09:01:32.650922  493486 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-295154 --name old-k8s-version-295154 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-295154 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-295154 --network old-k8s-version-295154 --ip 192.168.76.2 --volume old-k8s-version-295154:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 09:01:32.982372  493486 cli_runner.go:164] Run: docker container inspect old-k8s-version-295154 --format={{.State.Running}}
	I1129 09:01:33.001073  493486 cli_runner.go:164] Run: docker container inspect old-k8s-version-295154 --format={{.State.Status}}
	I1129 09:01:33.021021  493486 cli_runner.go:164] Run: docker exec old-k8s-version-295154 stat /var/lib/dpkg/alternatives/iptables
	I1129 09:01:33.078706  493486 oci.go:144] the created container "old-k8s-version-295154" has a running status.
	I1129 09:01:33.078890  493486 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa...
	I1129 09:01:33.213970  493486 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 09:01:33.251103  493486 cli_runner.go:164] Run: docker container inspect old-k8s-version-295154 --format={{.State.Status}}
	I1129 09:01:29.709142  494126 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1129 09:01:29.709367  494126 start.go:159] libmachine.API.Create for "no-preload-924441" (driver="docker")
	I1129 09:01:29.709398  494126 client.go:173] LocalClient.Create starting
	I1129 09:01:29.709475  494126 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem
	I1129 09:01:29.709526  494126 main.go:143] libmachine: Decoding PEM data...
	I1129 09:01:29.709553  494126 main.go:143] libmachine: Parsing certificate...
	I1129 09:01:29.709629  494126 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem
	I1129 09:01:29.709661  494126 main.go:143] libmachine: Decoding PEM data...
	I1129 09:01:29.709679  494126 main.go:143] libmachine: Parsing certificate...
	I1129 09:01:29.710082  494126 cli_runner.go:164] Run: docker network inspect no-preload-924441 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 09:01:29.727862  494126 cli_runner.go:211] docker network inspect no-preload-924441 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 09:01:29.727982  494126 network_create.go:284] running [docker network inspect no-preload-924441] to gather additional debugging logs...
	I1129 09:01:29.728011  494126 cli_runner.go:164] Run: docker network inspect no-preload-924441
	W1129 09:01:29.747053  494126 cli_runner.go:211] docker network inspect no-preload-924441 returned with exit code 1
	I1129 09:01:29.747092  494126 network_create.go:287] error running [docker network inspect no-preload-924441]: docker network inspect no-preload-924441: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-924441 not found
	I1129 09:01:29.747129  494126 network_create.go:289] output of [docker network inspect no-preload-924441]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-924441 not found
	
	** /stderr **
	I1129 09:01:29.747297  494126 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:01:29.769138  494126 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f69c672bf913 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:26:40:f4:ed:4f:ab} reservation:<nil>}
	I1129 09:01:29.769961  494126 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-96d20aff5877 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:01:e2:a3:b8:33} reservation:<nil>}
	I1129 09:01:29.770795  494126 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f7906c56f869 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:29:75:e3:e0:7f} reservation:<nil>}
	I1129 09:01:29.771440  494126 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-aea341d97cf5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ea:fb:22:ff:e0:65} reservation:<nil>}
	I1129 09:01:29.771972  494126 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-5ec7c7346e1b IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:f6:a5:df:dd:c8:cf} reservation:<nil>}
	I1129 09:01:29.772536  494126 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-ede9a8c5c6b0 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:3e:6e:06:75:02:7a} reservation:<nil>}
	I1129 09:01:29.773382  494126 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00201aa40}
	I1129 09:01:29.773412  494126 network_create.go:124] attempt to create docker network no-preload-924441 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1129 09:01:29.773492  494126 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-924441 no-preload-924441
	I1129 09:01:29.826699  494126 network_create.go:108] docker network no-preload-924441 192.168.103.0/24 created
	I1129 09:01:29.826822  494126 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-924441" container
	I1129 09:01:29.826907  494126 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 09:01:29.848520  494126 cli_runner.go:164] Run: docker volume create no-preload-924441 --label name.minikube.sigs.k8s.io=no-preload-924441 --label created_by.minikube.sigs.k8s.io=true
	I1129 09:01:29.870388  494126 oci.go:103] Successfully created a docker volume no-preload-924441
	I1129 09:01:29.870496  494126 cli_runner.go:164] Run: docker run --rm --name no-preload-924441-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-924441 --entrypoint /usr/bin/test -v no-preload-924441:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 09:01:32.848045  494126 cli_runner.go:217] Completed: docker run --rm --name no-preload-924441-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-924441 --entrypoint /usr/bin/test -v no-preload-924441:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (2.977502795s)
	I1129 09:01:32.848077  494126 oci.go:107] Successfully prepared a docker volume no-preload-924441
	I1129 09:01:32.848131  494126 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	W1129 09:01:32.848227  494126 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1129 09:01:32.848271  494126 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1129 09:01:32.848312  494126 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 09:01:32.909124  494126 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-924441 --name no-preload-924441 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-924441 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-924441 --network no-preload-924441 --ip 192.168.103.2 --volume no-preload-924441:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 09:01:33.229639  494126 cli_runner.go:164] Run: docker container inspect no-preload-924441 --format={{.State.Running}}
	I1129 09:01:33.257967  494126 cli_runner.go:164] Run: docker container inspect no-preload-924441 --format={{.State.Status}}
	I1129 09:01:33.283525  494126 cli_runner.go:164] Run: docker exec no-preload-924441 stat /var/lib/dpkg/alternatives/iptables
	I1129 09:01:33.358911  494126 oci.go:144] the created container "no-preload-924441" has a running status.
	I1129 09:01:33.358964  494126 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa...
	I1129 09:01:33.456248  494126 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 09:01:33.491041  494126 cli_runner.go:164] Run: docker container inspect no-preload-924441 --format={{.State.Status}}
	I1129 09:01:33.515555  494126 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 09:01:33.515581  494126 kic_runner.go:114] Args: [docker exec --privileged no-preload-924441 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 09:01:33.567971  494126 cli_runner.go:164] Run: docker container inspect no-preload-924441 --format={{.State.Status}}
	I1129 09:01:33.599907  494126 machine.go:94] provisionDockerMachine start ...
	I1129 09:01:33.599999  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:33.634873  494126 main.go:143] libmachine: Using SSH client type: native
	I1129 09:01:33.635521  494126 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1129 09:01:33.635590  494126 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:01:33.636667  494126 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34766->127.0.0.1:33063: read: connection reset by peer
	I1129 09:01:29.724136  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:01:29.724608  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:01:29.724657  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:01:29.724702  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:01:29.763194  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:29.763266  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:29.763286  460401 cri.go:89] found id: ""
	I1129 09:01:29.763304  460401 logs.go:282] 2 containers: [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:01:29.763372  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.769877  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.774814  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:01:29.774887  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:01:29.810078  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:29.810105  460401 cri.go:89] found id: ""
	I1129 09:01:29.810116  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:01:29.810167  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.815272  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:01:29.815349  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:01:29.851653  460401 cri.go:89] found id: ""
	I1129 09:01:29.851680  460401 logs.go:282] 0 containers: []
	W1129 09:01:29.851691  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:01:29.851700  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:01:29.851773  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:01:29.883424  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:29.883449  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:29.883456  460401 cri.go:89] found id: ""
	I1129 09:01:29.883466  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:01:29.883537  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.889105  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.894072  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:01:29.894150  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:01:29.924971  460401 cri.go:89] found id: ""
	I1129 09:01:29.925006  460401 logs.go:282] 0 containers: []
	W1129 09:01:29.925019  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:01:29.925027  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:01:29.925129  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:01:29.954168  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:29.954194  460401 cri.go:89] found id: "f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:29.954199  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:29.954203  460401 cri.go:89] found id: ""
	I1129 09:01:29.954214  460401 logs.go:282] 3 containers: [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:01:29.954278  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.959542  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.964240  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.968754  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:01:29.968820  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:01:29.999663  460401 cri.go:89] found id: ""
	I1129 09:01:29.999685  460401 logs.go:282] 0 containers: []
	W1129 09:01:29.999694  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:01:29.999700  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:01:29.999780  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:01:30.029803  460401 cri.go:89] found id: ""
	I1129 09:01:30.029833  460401 logs.go:282] 0 containers: []
	W1129 09:01:30.029845  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:01:30.029859  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:01:30.029877  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:30.069873  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:01:30.069904  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:30.108923  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:01:30.108958  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:30.146649  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:01:30.146682  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:01:30.190480  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:01:30.190514  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:01:30.225134  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:01:30.225167  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:01:30.299416  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:01:30.299461  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:01:30.314711  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:01:30.314766  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:01:30.384833  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:01:30.384856  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:01:30.384879  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:30.420690  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:01:30.420720  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:30.476182  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:01:30.476221  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:30.507666  460401 logs.go:123] Gathering logs for kube-controller-manager [f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00] ...
	I1129 09:01:30.507698  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:30.536613  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:01:30.536640  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:33.076844  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:01:33.077304  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:01:33.077371  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:01:33.077426  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:01:33.111899  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:33.111922  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:33.111928  460401 cri.go:89] found id: ""
	I1129 09:01:33.111938  460401 logs.go:282] 2 containers: [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:01:33.111995  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.117191  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.122615  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:01:33.122688  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:01:33.163794  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:33.163822  460401 cri.go:89] found id: ""
	I1129 09:01:33.163834  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:01:33.163897  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.170244  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:01:33.170334  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:01:33.203629  460401 cri.go:89] found id: ""
	I1129 09:01:33.203662  460401 logs.go:282] 0 containers: []
	W1129 09:01:33.203675  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:01:33.203683  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:01:33.203759  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:01:33.248112  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:33.248142  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:33.248148  460401 cri.go:89] found id: ""
	I1129 09:01:33.248159  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:01:33.248226  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.255192  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.262339  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:01:33.262419  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:01:33.308727  460401 cri.go:89] found id: ""
	I1129 09:01:33.308855  460401 logs.go:282] 0 containers: []
	W1129 09:01:33.308869  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:01:33.308878  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:01:33.309309  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:01:33.361181  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:33.361234  460401 cri.go:89] found id: "f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:33.361241  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:33.361245  460401 cri.go:89] found id: ""
	I1129 09:01:33.361255  460401 logs.go:282] 3 containers: [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:01:33.361343  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.368091  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.374495  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.380899  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:01:33.380965  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:01:33.430643  460401 cri.go:89] found id: ""
	I1129 09:01:33.430670  460401 logs.go:282] 0 containers: []
	W1129 09:01:33.430681  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:01:33.430689  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:01:33.430771  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:01:33.467019  460401 cri.go:89] found id: ""
	I1129 09:01:33.467047  460401 logs.go:282] 0 containers: []
	W1129 09:01:33.467058  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:01:33.467072  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:01:33.467091  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:01:33.529538  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:01:33.529588  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:33.591866  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:01:33.591912  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:33.664144  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:01:33.664179  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:33.701152  460401 logs.go:123] Gathering logs for kube-controller-manager [f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00] ...
	I1129 09:01:33.701195  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:33.735624  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:01:33.735669  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:33.774144  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:01:33.774175  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:01:33.808426  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:01:33.808461  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:01:33.898471  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:01:33.898509  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:01:33.914358  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:01:33.914394  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:01:33.978927  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:01:33.978954  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:01:33.978975  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:34.016239  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:01:34.016268  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:34.055208  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:01:34.055239  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:33.275806  493486 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 09:01:33.275832  493486 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-295154 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 09:01:33.349350  493486 cli_runner.go:164] Run: docker container inspect old-k8s-version-295154 --format={{.State.Status}}
	I1129 09:01:33.378383  493486 machine.go:94] provisionDockerMachine start ...
	I1129 09:01:33.378475  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:33.410015  493486 main.go:143] libmachine: Using SSH client type: native
	I1129 09:01:33.410367  493486 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1129 09:01:33.410384  493486 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:01:33.577990  493486 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-295154
	
	I1129 09:01:33.578018  493486 ubuntu.go:182] provisioning hostname "old-k8s-version-295154"
	I1129 09:01:33.578086  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:33.609401  493486 main.go:143] libmachine: Using SSH client type: native
	I1129 09:01:33.609890  493486 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1129 09:01:33.609953  493486 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-295154 && echo "old-k8s-version-295154" | sudo tee /etc/hostname
	I1129 09:01:33.789112  493486 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-295154
	
	I1129 09:01:33.789205  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:33.813423  493486 main.go:143] libmachine: Using SSH client type: native
	I1129 09:01:33.813741  493486 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1129 09:01:33.813774  493486 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-295154' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-295154/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-295154' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:01:33.966671  493486 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:01:33.966701  493486 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-255825/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-255825/.minikube}
	I1129 09:01:33.966720  493486 ubuntu.go:190] setting up certificates
	I1129 09:01:33.966746  493486 provision.go:84] configureAuth start
	I1129 09:01:33.966809  493486 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-295154
	I1129 09:01:33.987509  493486 provision.go:143] copyHostCerts
	I1129 09:01:33.987591  493486 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem, removing ...
	I1129 09:01:33.987609  493486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem
	I1129 09:01:33.987703  493486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem (1078 bytes)
	I1129 09:01:33.987854  493486 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem, removing ...
	I1129 09:01:33.987873  493486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem
	I1129 09:01:33.987926  493486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem (1123 bytes)
	I1129 09:01:33.988030  493486 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem, removing ...
	I1129 09:01:33.988043  493486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem
	I1129 09:01:33.988093  493486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem (1679 bytes)
	I1129 09:01:33.988197  493486 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-295154 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-295154]
	I1129 09:01:34.173289  493486 provision.go:177] copyRemoteCerts
	I1129 09:01:34.173365  493486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:01:34.173409  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:34.192053  493486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa Username:docker}
	I1129 09:01:34.294293  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:01:34.313898  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1129 09:01:34.331337  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 09:01:34.348272  493486 provision.go:87] duration metric: took 381.510752ms to configureAuth
	I1129 09:01:34.348301  493486 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:01:34.348457  493486 config.go:182] Loaded profile config "old-k8s-version-295154": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1129 09:01:34.348472  493486 machine.go:97] duration metric: took 970.068662ms to provisionDockerMachine
	I1129 09:01:34.348481  493486 client.go:176] duration metric: took 5.886553133s to LocalClient.Create
	I1129 09:01:34.348502  493486 start.go:167] duration metric: took 5.88663904s to libmachine.API.Create "old-k8s-version-295154"
	I1129 09:01:34.348512  493486 start.go:293] postStartSetup for "old-k8s-version-295154" (driver="docker")
	I1129 09:01:34.348520  493486 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:01:34.348570  493486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:01:34.348614  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:34.366501  493486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa Username:docker}
	I1129 09:01:34.469910  493486 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:01:34.473823  493486 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:01:34.473855  493486 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:01:34.473868  493486 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-255825/.minikube/addons for local assets ...
	I1129 09:01:34.473922  493486 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-255825/.minikube/files for local assets ...
	I1129 09:01:34.474038  493486 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem -> 2594832.pem in /etc/ssl/certs
	I1129 09:01:34.474177  493486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:01:34.481912  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem --> /etc/ssl/certs/2594832.pem (1708 bytes)
	I1129 09:01:34.502433  493486 start.go:296] duration metric: took 153.905912ms for postStartSetup
	I1129 09:01:34.502813  493486 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-295154
	I1129 09:01:34.520071  493486 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/config.json ...
	I1129 09:01:34.520308  493486 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:01:34.520347  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:34.539111  493486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa Username:docker}
	I1129 09:01:34.640199  493486 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:01:34.644901  493486 start.go:128] duration metric: took 6.185289215s to createHost
	I1129 09:01:34.644928  493486 start.go:83] releasing machines lock for "old-k8s-version-295154", held for 6.185484113s
	I1129 09:01:34.644991  493486 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-295154
	I1129 09:01:34.662525  493486 ssh_runner.go:195] Run: cat /version.json
	I1129 09:01:34.662583  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:34.662584  493486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:01:34.662648  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:34.679837  493486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa Username:docker}
	I1129 09:01:34.681115  493486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa Username:docker}
	I1129 09:01:34.833568  493486 ssh_runner.go:195] Run: systemctl --version
	I1129 09:01:34.840355  493486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:01:34.844844  493486 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:01:34.844907  493486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:01:34.869137  493486 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1129 09:01:34.869161  493486 start.go:496] detecting cgroup driver to use...
	I1129 09:01:34.869194  493486 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:01:34.869251  493486 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1129 09:01:34.883461  493486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1129 09:01:34.895885  493486 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:01:34.895942  493486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:01:34.912002  493486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:01:34.929350  493486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:01:35.015369  493486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:01:35.101537  493486 docker.go:234] disabling docker service ...
	I1129 09:01:35.101597  493486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:01:35.120759  493486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:01:35.133226  493486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:01:35.217122  493486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:01:35.301702  493486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:01:35.314440  493486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:01:35.328312  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1129 09:01:35.338331  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1129 09:01:35.346975  493486 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1129 09:01:35.347033  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1129 09:01:35.355511  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:01:35.363986  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1129 09:01:35.372342  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:01:35.380589  493486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:01:35.388205  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1129 09:01:35.396344  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1129 09:01:35.404459  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1129 09:01:35.412783  493486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:01:35.420177  493486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:01:35.427378  493486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:01:35.508150  493486 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1129 09:01:35.605801  493486 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1129 09:01:35.605868  493486 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1129 09:01:35.610095  493486 start.go:564] Will wait 60s for crictl version
	I1129 09:01:35.610140  493486 ssh_runner.go:195] Run: which crictl
	I1129 09:01:35.613826  493486 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:01:35.640869  493486 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1129 09:01:35.640947  493486 ssh_runner.go:195] Run: containerd --version
	I1129 09:01:35.662573  493486 ssh_runner.go:195] Run: containerd --version
	I1129 09:01:35.686990  493486 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1129 09:01:35.688126  493486 cli_runner.go:164] Run: docker network inspect old-k8s-version-295154 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:01:35.705269  493486 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1129 09:01:35.709565  493486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:01:35.720029  493486 kubeadm.go:884] updating cluster {Name:old-k8s-version-295154 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-295154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:01:35.720146  493486 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1129 09:01:35.720192  493486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:01:35.745337  493486 containerd.go:627] all images are preloaded for containerd runtime.
	I1129 09:01:35.745359  493486 containerd.go:534] Images already preloaded, skipping extraction
	I1129 09:01:35.745433  493486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:01:35.768552  493486 containerd.go:627] all images are preloaded for containerd runtime.
	I1129 09:01:35.768573  493486 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:01:35.768582  493486 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 containerd true true} ...
	I1129 09:01:35.768708  493486 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-295154 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-295154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:01:35.768800  493486 ssh_runner.go:195] Run: sudo crictl info
	I1129 09:01:35.793684  493486 cni.go:84] Creating CNI manager for ""
	I1129 09:01:35.793704  493486 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:01:35.793722  493486 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:01:35.793760  493486 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-295154 NodeName:old-k8s-version-295154 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:01:35.793881  493486 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-295154"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 09:01:35.793941  493486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1129 09:01:35.801702  493486 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:01:35.801779  493486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:01:35.809370  493486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1129 09:01:35.821645  493486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:01:35.837123  493486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2175 bytes)
	I1129 09:01:35.849282  493486 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:01:35.852777  493486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:01:35.862291  493486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:01:35.945522  493486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:01:35.967020  493486 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154 for IP: 192.168.76.2
	I1129 09:01:35.967046  493486 certs.go:195] generating shared ca certs ...
	I1129 09:01:35.967066  493486 certs.go:227] acquiring lock for ca certs: {Name:mk5e6bcae0a6944966b241f3c6197a472703c991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:35.967208  493486 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.key
	I1129 09:01:35.967259  493486 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.key
	I1129 09:01:35.967269  493486 certs.go:257] generating profile certs ...
	I1129 09:01:35.967334  493486 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.key
	I1129 09:01:35.967347  493486 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.crt with IP's: []
	I1129 09:01:36.097254  493486 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.crt ...
	I1129 09:01:36.097290  493486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.crt: {Name:mk21cfae97f1407d02cd99fe2a74be759b699397 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:36.097496  493486 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.key ...
	I1129 09:01:36.097514  493486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.key: {Name:mk0736bb845004e9c4d4a2d8602930ec0568eec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:36.097631  493486 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.key.a040bf72
	I1129 09:01:36.097693  493486 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.crt.a040bf72 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1129 09:01:36.144552  493486 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.crt.a040bf72 ...
	I1129 09:01:36.144579  493486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.crt.a040bf72: {Name:mk3fedcec97acb487835213600ee8b696c362f94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:36.144774  493486 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.key.a040bf72 ...
	I1129 09:01:36.144793  493486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.key.a040bf72: {Name:mk9dc52d2daf1391895a4ee3c561f559be0e2755 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:36.144904  493486 certs.go:382] copying /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.crt.a040bf72 -> /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.crt
	I1129 09:01:36.145012  493486 certs.go:386] copying /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.key.a040bf72 -> /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.key
	I1129 09:01:36.145117  493486 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.key
	I1129 09:01:36.145138  493486 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.crt with IP's: []
	I1129 09:01:36.307914  493486 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.crt ...
	I1129 09:01:36.307946  493486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.crt: {Name:mk698ad1b9e2e29d385fd97b123d5b48273c6d5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:36.308144  493486 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.key ...
	I1129 09:01:36.308172  493486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.key: {Name:mkcfd3db96260b6b8677060f32dcbd4dd8f838bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:36.308432  493486 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483.pem (1338 bytes)
	W1129 09:01:36.308490  493486 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483_empty.pem, impossibly tiny 0 bytes
	I1129 09:01:36.308506  493486 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 09:01:36.308543  493486 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:01:36.308590  493486 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:01:36.308633  493486 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem (1679 bytes)
	I1129 09:01:36.308689  493486 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem (1708 bytes)
	I1129 09:01:36.309360  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:01:36.328372  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:01:36.345872  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:01:36.363285  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 09:01:36.380427  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1129 09:01:36.397563  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 09:01:36.414929  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:01:36.432334  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 09:01:36.449233  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem --> /usr/share/ca-certificates/2594832.pem (1708 bytes)
	I1129 09:01:36.469085  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:01:36.485869  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483.pem --> /usr/share/ca-certificates/259483.pem (1338 bytes)
	I1129 09:01:36.502784  493486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:01:36.515208  493486 ssh_runner.go:195] Run: openssl version
	I1129 09:01:36.521390  493486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:01:36.529514  493486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:01:36.533021  493486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:01:36.533062  493486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:01:36.567579  493486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 09:01:36.576162  493486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259483.pem && ln -fs /usr/share/ca-certificates/259483.pem /etc/ssl/certs/259483.pem"
	I1129 09:01:36.584343  493486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259483.pem
	I1129 09:01:36.588122  493486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:35 /usr/share/ca-certificates/259483.pem
	I1129 09:01:36.588176  493486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259483.pem
	I1129 09:01:36.626659  493486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259483.pem /etc/ssl/certs/51391683.0"
	I1129 09:01:36.635780  493486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2594832.pem && ln -fs /usr/share/ca-certificates/2594832.pem /etc/ssl/certs/2594832.pem"
	I1129 09:01:36.644862  493486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2594832.pem
	I1129 09:01:36.648851  493486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:35 /usr/share/ca-certificates/2594832.pem
	I1129 09:01:36.648906  493486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2594832.pem
	I1129 09:01:36.691340  493486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2594832.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:01:36.701173  493486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:01:36.705050  493486 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 09:01:36.705110  493486 kubeadm.go:401] StartCluster: {Name:old-k8s-version-295154 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-295154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:01:36.705201  493486 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1129 09:01:36.705272  493486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:01:36.734535  493486 cri.go:89] found id: ""
	I1129 09:01:36.734592  493486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:01:36.743400  493486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 09:01:36.751273  493486 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 09:01:36.751332  493486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 09:01:36.760386  493486 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 09:01:36.760404  493486 kubeadm.go:158] found existing configuration files:
	
	I1129 09:01:36.760450  493486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 09:01:36.768796  493486 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 09:01:36.768854  493486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 09:01:36.776326  493486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 09:01:36.784663  493486 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 09:01:36.784720  493486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 09:01:36.793650  493486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 09:01:36.801817  493486 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 09:01:36.801887  493486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 09:01:36.811081  493486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 09:01:36.819075  493486 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 09:01:36.819130  493486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1129 09:01:36.827369  493486 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 09:01:36.885752  493486 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1129 09:01:36.885824  493486 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 09:01:36.932588  493486 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 09:01:36.932993  493486 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1129 09:01:36.933139  493486 kubeadm.go:319] OS: Linux
	I1129 09:01:36.933232  493486 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 09:01:36.933332  493486 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 09:01:36.933468  493486 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 09:01:36.933539  493486 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 09:01:36.933597  493486 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 09:01:36.933656  493486 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 09:01:36.933717  493486 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 09:01:36.933794  493486 kubeadm.go:319] CGROUPS_IO: enabled
	I1129 09:01:37.018039  493486 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 09:01:37.018169  493486 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 09:01:37.018319  493486 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1129 09:01:37.171075  493486 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 09:01:37.173428  493486 out.go:252]   - Generating certificates and keys ...
	I1129 09:01:37.173535  493486 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 09:01:37.173613  493486 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 09:01:37.301964  493486 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1129 09:01:37.410711  493486 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 09:01:37.550821  493486 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 09:01:37.787553  493486 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 09:01:37.889172  493486 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 09:01:37.889414  493486 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-295154] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1129 09:01:38.063017  493486 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 09:01:38.063214  493486 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-295154] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1129 09:01:38.202234  493486 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 09:01:38.262563  493486 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 09:01:36.787780  494126 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-924441
	
	I1129 09:01:36.787807  494126 ubuntu.go:182] provisioning hostname "no-preload-924441"
	I1129 09:01:36.787868  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:36.808836  494126 main.go:143] libmachine: Using SSH client type: native
	I1129 09:01:36.809153  494126 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1129 09:01:36.809173  494126 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-924441 && echo "no-preload-924441" | sudo tee /etc/hostname
	I1129 09:01:36.973090  494126 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-924441
	
	I1129 09:01:36.973172  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:36.993095  494126 main.go:143] libmachine: Using SSH client type: native
	I1129 09:01:36.993348  494126 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1129 09:01:36.993366  494126 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-924441' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-924441/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-924441' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:01:37.147252  494126 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:01:37.147286  494126 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-255825/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-255825/.minikube}
	I1129 09:01:37.147336  494126 ubuntu.go:190] setting up certificates
	I1129 09:01:37.147350  494126 provision.go:84] configureAuth start
	I1129 09:01:37.147407  494126 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-924441
	I1129 09:01:37.167771  494126 provision.go:143] copyHostCerts
	I1129 09:01:37.167841  494126 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem, removing ...
	I1129 09:01:37.167856  494126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem
	I1129 09:01:37.167941  494126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem (1078 bytes)
	I1129 09:01:37.168073  494126 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem, removing ...
	I1129 09:01:37.168087  494126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem
	I1129 09:01:37.168135  494126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem (1123 bytes)
	I1129 09:01:37.168246  494126 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem, removing ...
	I1129 09:01:37.168259  494126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem
	I1129 09:01:37.168304  494126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem (1679 bytes)
	I1129 09:01:37.168383  494126 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem org=jenkins.no-preload-924441 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-924441]
	I1129 09:01:37.302569  494126 provision.go:177] copyRemoteCerts
	I1129 09:01:37.302625  494126 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:01:37.302676  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:37.320965  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa Username:docker}
	I1129 09:01:37.425520  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:01:37.446589  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 09:01:37.463963  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1129 09:01:37.480486  494126 provision.go:87] duration metric: took 333.119398ms to configureAuth
	I1129 09:01:37.480511  494126 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:01:37.480667  494126 config.go:182] Loaded profile config "no-preload-924441": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:01:37.480680  494126 machine.go:97] duration metric: took 3.880753165s to provisionDockerMachine
	I1129 09:01:37.480691  494126 client.go:176] duration metric: took 7.771282469s to LocalClient.Create
	I1129 09:01:37.480714  494126 start.go:167] duration metric: took 7.771346771s to libmachine.API.Create "no-preload-924441"
	I1129 09:01:37.480726  494126 start.go:293] postStartSetup for "no-preload-924441" (driver="docker")
	I1129 09:01:37.480750  494126 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:01:37.480814  494126 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:01:37.480883  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:37.498996  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa Username:docker}
	I1129 09:01:37.602864  494126 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:01:37.606394  494126 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:01:37.606428  494126 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:01:37.606439  494126 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-255825/.minikube/addons for local assets ...
	I1129 09:01:37.606502  494126 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-255825/.minikube/files for local assets ...
	I1129 09:01:37.606593  494126 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem -> 2594832.pem in /etc/ssl/certs
	I1129 09:01:37.606724  494126 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:01:37.614670  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem --> /etc/ssl/certs/2594832.pem (1708 bytes)
	I1129 09:01:37.635134  494126 start.go:296] duration metric: took 154.380805ms for postStartSetup
	I1129 09:01:37.635554  494126 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-924441
	I1129 09:01:37.655528  494126 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/config.json ...
	I1129 09:01:37.655850  494126 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:01:37.655900  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:37.677317  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa Username:docker}
	I1129 09:01:37.781275  494126 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:01:37.786042  494126 start.go:128] duration metric: took 8.07881841s to createHost
	I1129 09:01:37.786069  494126 start.go:83] releasing machines lock for "no-preload-924441", held for 8.078998368s
	I1129 09:01:37.786141  494126 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-924441
	I1129 09:01:37.805459  494126 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:01:37.805494  494126 ssh_runner.go:195] Run: cat /version.json
	I1129 09:01:37.805552  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:37.805561  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:37.824515  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa Username:docker}
	I1129 09:01:37.825042  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa Username:docker}
	I1129 09:01:37.978797  494126 ssh_runner.go:195] Run: systemctl --version
	I1129 09:01:37.985561  494126 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:01:37.990121  494126 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:01:37.990198  494126 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:01:38.014806  494126 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1129 09:01:38.014833  494126 start.go:496] detecting cgroup driver to use...
	I1129 09:01:38.014872  494126 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:01:38.014922  494126 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1129 09:01:38.028890  494126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1129 09:01:38.040635  494126 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:01:38.040704  494126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:01:38.059274  494126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:01:38.079903  494126 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:01:38.160895  494126 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:01:38.248638  494126 docker.go:234] disabling docker service ...
	I1129 09:01:38.248693  494126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:01:38.270699  494126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:01:38.283241  494126 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:01:38.364018  494126 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:01:38.451578  494126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:01:38.464900  494126 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:01:38.478711  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1129 09:01:38.488688  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1129 09:01:38.497188  494126 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1129 09:01:38.497235  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1129 09:01:38.506143  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:01:38.514500  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1129 09:01:38.522578  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:01:38.530605  494126 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:01:38.538074  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1129 09:01:38.546395  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1129 09:01:38.554633  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1129 09:01:38.564192  494126 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:01:38.571328  494126 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:01:38.578488  494126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:01:38.657072  494126 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1129 09:01:38.731899  494126 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1129 09:01:38.731970  494126 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1129 09:01:38.736165  494126 start.go:564] Will wait 60s for crictl version
	I1129 09:01:38.736223  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:38.739821  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:01:38.765727  494126 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1129 09:01:38.765799  494126 ssh_runner.go:195] Run: containerd --version
	I1129 09:01:38.788554  494126 ssh_runner.go:195] Run: containerd --version
	I1129 09:01:38.813801  494126 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1129 09:01:38.554215  493486 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 09:01:38.554337  493486 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 09:01:38.871587  493486 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 09:01:39.076048  493486 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 09:01:39.365556  493486 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 09:01:39.428949  493486 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 09:01:39.429579  493486 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 09:01:39.438444  493486 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 09:01:38.814940  494126 cli_runner.go:164] Run: docker network inspect no-preload-924441 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:01:38.832444  494126 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1129 09:01:38.836556  494126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:01:38.846826  494126 kubeadm.go:884] updating cluster {Name:no-preload-924441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-924441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:01:38.846940  494126 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:01:38.846988  494126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:01:38.875513  494126 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1129 09:01:38.875537  494126 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1129 09:01:38.875606  494126 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:38.875606  494126 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:01:38.875633  494126 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:01:38.875642  494126 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:01:38.875663  494126 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1129 09:01:38.875672  494126 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:01:38.875613  494126 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1129 09:01:38.875710  494126 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:01:38.877065  494126 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:01:38.877082  494126 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:01:38.877098  494126 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:01:38.877104  494126 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1129 09:01:38.877132  494126 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:01:38.877185  494126 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:01:38.877233  494126 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:38.877189  494126 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1129 09:01:39.045541  494126 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813"
	I1129 09:01:39.045605  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:01:39.049466  494126 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f"
	I1129 09:01:39.049525  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:01:39.055696  494126 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97"
	I1129 09:01:39.055787  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:01:39.065913  494126 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115"
	I1129 09:01:39.065987  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
	I1129 09:01:39.071326  494126 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1129 09:01:39.071386  494126 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:01:39.071433  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.072494  494126 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969"
	I1129 09:01:39.072560  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:01:39.074055  494126 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1129 09:01:39.074103  494126 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:01:39.074155  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.079805  494126 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1129 09:01:39.079853  494126 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:01:39.079906  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.090225  494126 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1129 09:01:39.090271  494126 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1129 09:01:39.090279  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:01:39.090318  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.094954  494126 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7"
	I1129 09:01:39.095016  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:01:39.096356  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:01:39.096365  494126 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1129 09:01:39.096402  494126 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:01:39.096438  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:01:39.096440  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.108053  494126 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I1129 09:01:39.108111  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1129 09:01:39.125198  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:01:39.125300  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1129 09:01:39.125361  494126 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1129 09:01:39.125408  494126 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:01:39.125455  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.128374  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:01:39.132565  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:01:39.132640  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:01:39.138113  494126 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1129 09:01:39.138163  494126 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1129 09:01:39.138200  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.167013  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:01:39.167128  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:01:39.167330  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:01:39.167330  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1129 09:01:39.167996  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:01:39.173113  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:01:39.173171  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1129 09:01:39.214078  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1129 09:01:39.214193  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1129 09:01:39.214389  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:01:39.214576  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1129 09:01:39.220552  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1129 09:01:39.220649  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1129 09:01:39.220857  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1129 09:01:39.220895  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1129 09:01:39.222433  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:01:39.222493  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1129 09:01:39.222587  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1129 09:01:39.222669  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1129 09:01:39.275608  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:01:39.275622  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1129 09:01:39.275679  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1129 09:01:39.275707  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1129 09:01:39.275716  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1129 09:01:39.287672  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1129 09:01:39.287708  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1129 09:01:39.287708  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1129 09:01:39.287808  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1129 09:01:39.287825  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1129 09:01:39.339051  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1129 09:01:39.339082  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1129 09:01:39.339092  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1129 09:01:39.339110  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1129 09:01:39.339137  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1129 09:01:39.339173  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1129 09:01:39.339202  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1129 09:01:39.339317  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1129 09:01:39.424948  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1129 09:01:39.424997  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1129 09:01:39.425030  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1129 09:01:39.425058  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1129 09:01:36.592807  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:01:36.593240  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:01:36.593304  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:01:36.593360  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:01:36.620981  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:36.621002  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:36.621008  460401 cri.go:89] found id: ""
	I1129 09:01:36.621018  460401 logs.go:282] 2 containers: [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:01:36.621079  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.627593  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.632350  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:01:36.632420  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:01:36.660070  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:36.660091  460401 cri.go:89] found id: ""
	I1129 09:01:36.660100  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:01:36.660156  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.664644  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:01:36.664720  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:01:36.696935  460401 cri.go:89] found id: ""
	I1129 09:01:36.696967  460401 logs.go:282] 0 containers: []
	W1129 09:01:36.696977  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:01:36.696985  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:01:36.697045  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:01:36.726832  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:36.726857  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:36.726864  460401 cri.go:89] found id: ""
	I1129 09:01:36.726874  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:01:36.726928  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.732693  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.737783  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:01:36.737848  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:01:36.765201  460401 cri.go:89] found id: ""
	I1129 09:01:36.765229  460401 logs.go:282] 0 containers: []
	W1129 09:01:36.765238  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:01:36.765245  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:01:36.765300  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:01:36.795203  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:36.795231  460401 cri.go:89] found id: "f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:36.795237  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:36.795242  460401 cri.go:89] found id: ""
	I1129 09:01:36.795251  460401 logs.go:282] 3 containers: [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:01:36.795316  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.801008  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.806325  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.811017  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:01:36.811088  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:01:36.840359  460401 cri.go:89] found id: ""
	I1129 09:01:36.840386  460401 logs.go:282] 0 containers: []
	W1129 09:01:36.840397  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:01:36.840406  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:01:36.840469  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:01:36.874045  460401 cri.go:89] found id: ""
	I1129 09:01:36.874068  460401 logs.go:282] 0 containers: []
	W1129 09:01:36.874075  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:01:36.874085  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:01:36.874099  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:01:36.950404  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:01:36.950426  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:01:36.950442  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:36.994232  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:01:36.994264  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:37.049507  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:01:37.049546  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:37.087133  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:01:37.087165  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:37.117577  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:01:37.117602  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:37.154176  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:01:37.154210  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:37.197090  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:01:37.197121  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:37.240775  460401 logs.go:123] Gathering logs for kube-controller-manager [f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00] ...
	I1129 09:01:37.240811  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:37.269234  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:01:37.269260  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:01:37.312948  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:01:37.312979  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:01:37.348500  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:01:37.348527  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:01:37.435755  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:01:37.435786  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:01:39.440026  493486 out.go:252]   - Booting up control plane ...
	I1129 09:01:39.440161  493486 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 09:01:39.440285  493486 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 09:01:39.440970  493486 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 09:01:39.459308  493486 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 09:01:39.460971  493486 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 09:01:39.461057  493486 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 09:01:39.610284  493486 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1129 09:01:39.952440  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:01:39.952996  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:01:39.953076  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:01:39.953145  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:01:39.990073  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:39.990100  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:39.990107  460401 cri.go:89] found id: ""
	I1129 09:01:39.990117  460401 logs.go:282] 2 containers: [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:01:39.990183  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.996871  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:40.002374  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:01:40.002458  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:01:40.036502  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:40.036525  460401 cri.go:89] found id: ""
	I1129 09:01:40.036542  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:01:40.036600  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:40.044171  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:01:40.044261  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:01:40.084048  460401 cri.go:89] found id: ""
	I1129 09:01:40.084165  460401 logs.go:282] 0 containers: []
	W1129 09:01:40.084184  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:01:40.084195  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:01:40.084329  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:01:40.116869  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:40.116899  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:40.116905  460401 cri.go:89] found id: ""
	I1129 09:01:40.116916  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:01:40.116982  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:40.123222  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:40.128079  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:01:40.128146  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:01:40.159071  460401 cri.go:89] found id: ""
	I1129 09:01:40.159101  460401 logs.go:282] 0 containers: []
	W1129 09:01:40.159112  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:01:40.159120  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:01:40.159178  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:01:40.191945  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:40.191973  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:40.191979  460401 cri.go:89] found id: ""
	I1129 09:01:40.191990  460401 logs.go:282] 2 containers: [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:01:40.192055  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:40.197191  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:40.202276  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:01:40.202350  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:01:40.236481  460401 cri.go:89] found id: ""
	I1129 09:01:40.236510  460401 logs.go:282] 0 containers: []
	W1129 09:01:40.236521  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:01:40.236528  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:01:40.236597  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:01:40.266476  460401 cri.go:89] found id: ""
	I1129 09:01:40.266505  460401 logs.go:282] 0 containers: []
	W1129 09:01:40.266516  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:01:40.266529  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:01:40.266547  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:40.310670  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:01:40.310713  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:40.362446  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:01:40.362487  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:40.399108  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:01:40.399138  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:40.435770  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:01:40.435799  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:01:40.485497  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:01:40.485541  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:01:40.502944  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:01:40.502977  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:01:40.592582  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:01:40.592610  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:01:40.592626  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:40.634792  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:01:40.634828  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:01:40.678348  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:01:40.678382  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:01:40.797799  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:01:40.797849  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:40.854148  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:01:40.854196  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
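The cycle above repeats a simple pattern: probe the apiserver's /healthz endpoint, and when the connection is refused fall back to gathering component logs. A minimal Go sketch of the polling half of that pattern; the endpoint matches the log above, but the probe interval, deadline, and TLS handling are illustrative assumptions, not minikube's implementation:

    // Illustrative sketch: poll an apiserver /healthz endpoint until it answers
    // 200 OK or a deadline passes. A refused connection means "not up yet".
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, deadline time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // the bootstrapping apiserver serves a self-signed cert, so the
            // probe skips verification (assumption for this example)
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // control plane answered /healthz
                }
            }
            time.Sleep(3 * time.Second) // connection refused or non-200: retry
        }
        return fmt.Errorf("apiserver did not become healthy within %s", deadline)
    }

    func main() {
        if err := waitForHealthz("https://192.168.85.2:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }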
	I1129 09:01:43.404360  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:01:43.404858  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:01:43.404925  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:01:43.404996  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:01:43.435800  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:43.435836  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:43.435843  460401 cri.go:89] found id: ""
	I1129 09:01:43.435854  460401 logs.go:282] 2 containers: [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:01:43.435923  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:43.441287  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:43.445761  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:01:43.445837  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:01:43.474830  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:43.474859  460401 cri.go:89] found id: ""
	I1129 09:01:43.474870  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:01:43.474932  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:43.481397  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:01:43.481483  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:01:43.513967  460401 cri.go:89] found id: ""
	I1129 09:01:43.513995  460401 logs.go:282] 0 containers: []
	W1129 09:01:43.514006  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:01:43.514014  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:01:43.514074  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:01:43.550388  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:43.550416  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:43.550421  460401 cri.go:89] found id: ""
	I1129 09:01:43.550431  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:01:43.550505  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:43.557316  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:43.563173  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:01:43.563248  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:01:43.599482  460401 cri.go:89] found id: ""
	I1129 09:01:43.599524  460401 logs.go:282] 0 containers: []
	W1129 09:01:43.599535  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:01:43.599545  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:01:43.599611  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:01:43.637030  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:43.637053  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:43.637059  460401 cri.go:89] found id: ""
	I1129 09:01:43.637069  460401 logs.go:282] 2 containers: [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:01:43.637130  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:43.643786  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:43.650011  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:01:43.650089  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:01:43.687244  460401 cri.go:89] found id: ""
	I1129 09:01:43.687273  460401 logs.go:282] 0 containers: []
	W1129 09:01:43.687295  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:01:43.687303  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:01:43.687372  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:01:43.726453  460401 cri.go:89] found id: ""
	I1129 09:01:43.726490  460401 logs.go:282] 0 containers: []
	W1129 09:01:43.726501  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:01:43.726515  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:01:43.726533  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:43.795442  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:01:43.795490  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:43.841417  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:01:43.841457  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:43.888511  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:01:43.888554  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:01:43.930753  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:01:43.930789  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:01:44.043358  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:01:44.043410  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:01:44.065065  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:01:44.065107  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:44.112915  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:01:44.112958  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:01:44.174077  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:01:44.174120  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:01:44.247887  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:01:44.247909  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:01:44.247927  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:44.290842  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:01:44.290882  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:44.335297  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:01:44.335330  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
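The log collection itself is driven entirely through crictl: list container IDs for each control-plane component by name, then tail each container's logs. A small Go sketch of that sequence, reusing the crictl flags shown in the commands above; error handling and output formatting are simplified for the example:

    // Illustrative sketch: gather logs for each control-plane component via crictl.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func gatherComponentLogs(component string) {
        // sudo crictl ps -a --quiet --name=<component>
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            fmt.Printf("listing %s containers failed: %v\n", component, err)
            return
        }
        for _, id := range strings.Fields(string(out)) {
            // sudo crictl logs --tail 400 <id>
            logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
            if err != nil {
                fmt.Printf("logs for %s (%s) failed: %v\n", component, id, err)
                continue
            }
            fmt.Printf("==> %s [%s] <==\n%s\n", component, id, logs)
        }
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
            gatherComponentLogs(c)
        }
    }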
	I1129 09:01:39.522040  494126 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1129 09:01:39.522116  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1129 09:01:39.664265  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1129 09:01:39.664314  494126 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1129 09:01:39.664386  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1129 09:01:40.291377  494126 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1129 09:01:40.291450  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:40.811289  494126 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.146868238s)
	I1129 09:01:40.811331  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1129 09:01:40.811358  494126 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1129 09:01:40.811407  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1129 09:01:40.811531  494126 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1129 09:01:40.811570  494126 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:40.811610  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:41.858427  494126 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.046983131s)
	I1129 09:01:41.858463  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1129 09:01:41.858488  494126 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1129 09:01:41.858484  494126 ssh_runner.go:235] Completed: which crictl: (1.046843529s)
	I1129 09:01:41.858549  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1129 09:01:41.858557  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:43.352594  494126 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.494004994s)
	I1129 09:01:43.352634  494126 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.49406142s)
	I1129 09:01:43.352657  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1129 09:01:43.352684  494126 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1129 09:01:43.352721  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:43.352741  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1129 09:01:44.495181  494126 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.142420788s)
	I1129 09:01:44.495251  494126 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.142485031s)
	I1129 09:01:44.495274  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:44.495280  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1129 09:01:44.495307  494126 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1129 09:01:44.495357  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
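The cache-load loop above imports each pre-pulled image tarball under /var/lib/minikube/images into containerd's k8s.io namespace with `ctr -n=k8s.io images import`. A hedged Go sketch of the same idea; only the ctr invocation mirrors the log, the directory glob and error handling are illustrative:

    // Illustrative sketch: import cached image tarballs into containerd.
    package main

    import (
        "fmt"
        "os/exec"
        "path/filepath"
    )

    func main() {
        tars, err := filepath.Glob("/var/lib/minikube/images/*")
        if err != nil {
            fmt.Println(err)
            return
        }
        for _, tar := range tars {
            // sudo ctr -n=k8s.io images import <tarball>
            cmd := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tar)
            if out, err := cmd.CombinedOutput(); err != nil {
                fmt.Printf("import of %s failed: %v\n%s\n", tar, err, out)
                continue
            }
            fmt.Printf("loaded %s\n", tar)
        }
    }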
	I1129 09:01:44.611298  493486 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.002099 seconds
	I1129 09:01:44.611461  493486 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 09:01:44.626505  493486 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 09:01:45.150669  493486 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 09:01:45.150981  493486 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-295154 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 09:01:45.666153  493486 kubeadm.go:319] [bootstrap-token] Using token: fc3siq.brm7sjv6bjwb7j34
	I1129 09:01:45.667757  493486 out.go:252]   - Configuring RBAC rules ...
	I1129 09:01:45.667991  493486 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 09:01:45.673404  493486 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 09:01:45.685336  493486 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 09:01:45.691974  493486 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 09:01:45.695311  493486 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 09:01:45.698699  493486 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 09:01:45.712796  493486 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 09:01:45.913473  493486 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 09:01:46.081267  493486 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 09:01:46.081993  493486 kubeadm.go:319] 
	I1129 09:01:46.082087  493486 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 09:01:46.082095  493486 kubeadm.go:319] 
	I1129 09:01:46.082160  493486 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 09:01:46.082179  493486 kubeadm.go:319] 
	I1129 09:01:46.082199  493486 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 09:01:46.082251  493486 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 09:01:46.082302  493486 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 09:01:46.082308  493486 kubeadm.go:319] 
	I1129 09:01:46.082372  493486 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 09:01:46.082377  493486 kubeadm.go:319] 
	I1129 09:01:46.082434  493486 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 09:01:46.082445  493486 kubeadm.go:319] 
	I1129 09:01:46.082520  493486 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 09:01:46.082627  493486 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 09:01:46.082750  493486 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 09:01:46.082756  493486 kubeadm.go:319] 
	I1129 09:01:46.082891  493486 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 09:01:46.083019  493486 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 09:01:46.083030  493486 kubeadm.go:319] 
	I1129 09:01:46.083149  493486 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token fc3siq.brm7sjv6bjwb7j34 \
	I1129 09:01:46.083319  493486 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:cfb13a4080e942b53ddf5e01885fcdd270ac918e177076400130991e2b6b7778 \
	I1129 09:01:46.083366  493486 kubeadm.go:319] 	--control-plane 
	I1129 09:01:46.083383  493486 kubeadm.go:319] 
	I1129 09:01:46.083539  493486 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 09:01:46.083561  493486 kubeadm.go:319] 
	I1129 09:01:46.083696  493486 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token fc3siq.brm7sjv6bjwb7j34 \
	I1129 09:01:46.083889  493486 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:cfb13a4080e942b53ddf5e01885fcdd270ac918e177076400130991e2b6b7778 
	I1129 09:01:46.087692  493486 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1129 09:01:46.087874  493486 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1129 09:01:46.087925  493486 cni.go:84] Creating CNI manager for ""
	I1129 09:01:46.087942  493486 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:01:46.089437  493486 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1129 09:01:46.093295  493486 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 09:01:46.100033  493486 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1129 09:01:46.100061  493486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 09:01:46.118046  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1129 09:01:47.108562  493486 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 09:01:47.108767  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:47.108838  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-295154 minikube.k8s.io/updated_at=2025_11_29T09_01_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=old-k8s-version-295154 minikube.k8s.io/primary=true
	I1129 09:01:47.209163  493486 ops.go:34] apiserver oom_adj: -16
	I1129 09:01:47.209168  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:47.709726  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:48.209857  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:44.521775  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1129 09:01:44.521916  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1129 09:01:45.636811  494126 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.141419574s)
	I1129 09:01:45.636849  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1129 09:01:45.636857  494126 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.114924181s)
	I1129 09:01:45.636879  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1129 09:01:45.636882  494126 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1129 09:01:45.636902  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1129 09:01:45.636924  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1129 09:01:48.452908  494126 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (2.815950505s)
	I1129 09:01:48.452936  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1129 09:01:48.452972  494126 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1129 09:01:48.453041  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1129 09:01:49.370622  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1129 09:01:49.370663  494126 cache_images.go:125] Successfully loaded all cached images
	I1129 09:01:49.370668  494126 cache_images.go:94] duration metric: took 10.495116704s to LoadCachedImages
	I1129 09:01:49.370682  494126 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 containerd true true} ...
	I1129 09:01:49.370811  494126 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-924441 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-924441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:01:49.370873  494126 ssh_runner.go:195] Run: sudo crictl info
	I1129 09:01:49.397690  494126 cni.go:84] Creating CNI manager for ""
	I1129 09:01:49.397714  494126 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:01:49.397740  494126 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:01:49.397786  494126 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-924441 NodeName:no-preload-924441 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:01:49.397929  494126 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-924441"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
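The multi-document kubeadm config dumped above wires the kubelet to containerd (containerRuntimeEndpoint) and to the systemd cgroup driver, which must match the runtime's own cgroup driver. A small editorial Go sketch that parses the generated file and prints those two fields; it assumes gopkg.in/yaml.v3 is available and uses the path the config is later written to (/var/tmp/minikube/kubeadm.yaml):

    // Illustrative check of the KubeletConfiguration document in the generated config.
    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    type kubeletConfig struct {
        Kind                     string `yaml:"kind"`
        CgroupDriver             string `yaml:"cgroupDriver"`
        ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
    }

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            fmt.Println(err)
            return
        }
        defer f.Close()

        dec := yaml.NewDecoder(f) // the file holds several YAML documents separated by ---
        for {
            var doc kubeletConfig
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                fmt.Println("parse error:", err)
                return
            }
            if doc.Kind == "KubeletConfiguration" {
                fmt.Println("cgroupDriver:", doc.CgroupDriver)             // expect "systemd"
                fmt.Println("CRI endpoint:", doc.ContainerRuntimeEndpoint) // expect unix:///run/containerd/containerd.sock
            }
        }
    }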
	I1129 09:01:49.397999  494126 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:01:49.407101  494126 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1129 09:01:49.407180  494126 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1129 09:01:49.415958  494126 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1129 09:01:49.415978  494126 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256
	I1129 09:01:49.416026  494126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:01:49.416047  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1129 09:01:49.415978  494126 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I1129 09:01:49.416149  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1129 09:01:49.429834  494126 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1129 09:01:49.429872  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1129 09:01:49.429915  494126 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1129 09:01:49.429924  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1129 09:01:49.429943  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
	I1129 09:01:49.438987  494126 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1129 09:01:49.439024  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1129 09:01:46.884140  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:01:48.710027  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:49.210030  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:49.709395  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:50.209866  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:50.709354  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:51.209979  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:51.710291  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:52.209895  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:52.709970  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:53.209937  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
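The half-second cadence of the `kubectl get sa default` runs above is a readiness gate: nothing is deployed until the controller manager has created the default service account in the default namespace. A minimal Go sketch of that retry loop; the kubectl path and kubeconfig come from the log, while the interval and deadline are assumptions for the example:

    // Illustrative sketch: wait until the default service account exists.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.28.0/kubectl"
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                fmt.Println("default service account is present")
                return
            }
            time.Sleep(500 * time.Millisecond) // not created yet, retry
        }
        fmt.Println("timed out waiting for the default service account")
    }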
	I1129 09:01:49.969644  494126 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:01:49.978574  494126 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1129 09:01:49.992833  494126 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:01:50.009876  494126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1129 09:01:50.023695  494126 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:01:50.027747  494126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:01:50.038376  494126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:01:50.121247  494126 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:01:50.149394  494126 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441 for IP: 192.168.103.2
	I1129 09:01:50.149417  494126 certs.go:195] generating shared ca certs ...
	I1129 09:01:50.149438  494126 certs.go:227] acquiring lock for ca certs: {Name:mk5e6bcae0a6944966b241f3c6197a472703c991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:50.149602  494126 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.key
	I1129 09:01:50.149703  494126 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.key
	I1129 09:01:50.149717  494126 certs.go:257] generating profile certs ...
	I1129 09:01:50.149797  494126 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.key
	I1129 09:01:50.149812  494126 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.crt with IP's: []
	I1129 09:01:50.352856  494126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.crt ...
	I1129 09:01:50.352896  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.crt: {Name:mk24ad5255d5c075502606493622eaafcc9932fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:50.353102  494126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.key ...
	I1129 09:01:50.353115  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.key: {Name:mkdb2263ef25fafc1ea0385357022f8199c8aa35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:50.353223  494126 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.key.f72e5c7b
	I1129 09:01:50.353240  494126 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.crt.f72e5c7b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1129 09:01:50.513341  494126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.crt.f72e5c7b ...
	I1129 09:01:50.513379  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.crt.f72e5c7b: {Name:mk3f760c06958b6df21bcc9bde3527a0c97ad882 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:50.513582  494126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.key.f72e5c7b ...
	I1129 09:01:50.513601  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.key.f72e5c7b: {Name:mk4c8be15a8f6eca407c52c7afdc7ecb10357a29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:50.513678  494126 certs.go:382] copying /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.crt.f72e5c7b -> /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.crt
	I1129 09:01:50.513771  494126 certs.go:386] copying /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.key.f72e5c7b -> /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.key
	I1129 09:01:50.513831  494126 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.key
	I1129 09:01:50.513847  494126 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.crt with IP's: []
	I1129 09:01:50.651114  494126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.crt ...
	I1129 09:01:50.651146  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.crt: {Name:mkbdace4e62ecdfbe11ae904155295b956ffc842 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:50.651330  494126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.key ...
	I1129 09:01:50.651343  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.key: {Name:mk14d837fb2449197c689047daf9f07db1da4b8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
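Each profile certificate above is issued with an explicit SAN list; the apiserver cert, for example, carries the cluster service IP, loopback, and node IP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.103.2). A hedged Go sketch of issuing a serving certificate with those IP SANs; minikube signs these with its own CA, whereas this example self-signs purely to stay short:

    // Illustrative sketch: emit a serving certificate with the IP SANs from the log.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{ // SANs taken from the log line above
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }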
	I1129 09:01:50.651522  494126 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483.pem (1338 bytes)
	W1129 09:01:50.651563  494126 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483_empty.pem, impossibly tiny 0 bytes
	I1129 09:01:50.651573  494126 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 09:01:50.651652  494126 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:01:50.651691  494126 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:01:50.651714  494126 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem (1679 bytes)
	I1129 09:01:50.651769  494126 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem (1708 bytes)
	I1129 09:01:50.652337  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:01:50.672071  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:01:50.691184  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:01:50.711306  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 09:01:50.730860  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1129 09:01:50.750662  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1129 09:01:50.771690  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:01:50.791789  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1129 09:01:50.811356  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483.pem --> /usr/share/ca-certificates/259483.pem (1338 bytes)
	I1129 09:01:50.833983  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem --> /usr/share/ca-certificates/2594832.pem (1708 bytes)
	I1129 09:01:50.853036  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:01:50.871262  494126 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:01:50.885099  494126 ssh_runner.go:195] Run: openssl version
	I1129 09:01:50.892072  494126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259483.pem && ln -fs /usr/share/ca-certificates/259483.pem /etc/ssl/certs/259483.pem"
	I1129 09:01:50.901864  494126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259483.pem
	I1129 09:01:50.906616  494126 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:35 /usr/share/ca-certificates/259483.pem
	I1129 09:01:50.906675  494126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259483.pem
	I1129 09:01:50.943595  494126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259483.pem /etc/ssl/certs/51391683.0"
	I1129 09:01:50.953459  494126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2594832.pem && ln -fs /usr/share/ca-certificates/2594832.pem /etc/ssl/certs/2594832.pem"
	I1129 09:01:50.962610  494126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2594832.pem
	I1129 09:01:50.966703  494126 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:35 /usr/share/ca-certificates/2594832.pem
	I1129 09:01:50.966778  494126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2594832.pem
	I1129 09:01:51.002253  494126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2594832.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:01:51.012487  494126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:01:51.022391  494126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:01:51.026710  494126 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:01:51.026814  494126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:01:51.063394  494126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 09:01:51.073278  494126 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:01:51.077328  494126 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 09:01:51.077396  494126 kubeadm.go:401] StartCluster: {Name:no-preload-924441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-924441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:01:51.077489  494126 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1129 09:01:51.077532  494126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:01:51.106096  494126 cri.go:89] found id: ""
	I1129 09:01:51.106183  494126 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:01:51.115333  494126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 09:01:51.123937  494126 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 09:01:51.124003  494126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 09:01:51.132534  494126 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 09:01:51.132560  494126 kubeadm.go:158] found existing configuration files:
	
	I1129 09:01:51.132605  494126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 09:01:51.140877  494126 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 09:01:51.140937  494126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 09:01:51.149370  494126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 09:01:51.157660  494126 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 09:01:51.157716  494126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 09:01:51.165600  494126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 09:01:51.173968  494126 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 09:01:51.174023  494126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 09:01:51.182141  494126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 09:01:51.190488  494126 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 09:01:51.190548  494126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1129 09:01:51.198568  494126 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 09:01:51.257848  494126 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1129 09:01:51.317135  494126 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
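The 09:01:51.13-51.19 lines above show the stale-config check: each expected kubeconfig under /etc/kubernetes is grepped for the control-plane endpoint and removed when it does not contain it, so the following kubeadm init can regenerate it. A minimal standalone sketch of that check-and-remove pattern (endpoint and file names taken from the log):

    ENDPOINT="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Keep the file only if it already points at the expected endpoint; otherwise delete it
      # so that 'kubeadm init' recreates it, mirroring the grep/rm pairs in the log above.
      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done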
	I1129 09:01:51.885035  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1129 09:01:51.885110  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:01:51.885188  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:01:51.917617  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:01:51.917638  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:51.917644  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:51.917647  460401 cri.go:89] found id: ""
	I1129 09:01:51.917655  460401 logs.go:282] 3 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:01:51.917717  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:51.923877  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:51.929304  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:51.934465  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:01:51.934561  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:01:51.963685  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:51.963708  460401 cri.go:89] found id: ""
	I1129 09:01:51.963719  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:01:51.963801  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:51.968956  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:01:51.969028  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:01:51.996971  460401 cri.go:89] found id: ""
	I1129 09:01:51.997000  460401 logs.go:282] 0 containers: []
	W1129 09:01:51.997007  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:01:51.997013  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:01:51.997078  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:01:52.028822  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:52.028850  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:52.028856  460401 cri.go:89] found id: ""
	I1129 09:01:52.028866  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:01:52.028936  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:52.034812  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:52.039943  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:01:52.040009  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:01:52.069835  460401 cri.go:89] found id: ""
	I1129 09:01:52.069866  460401 logs.go:282] 0 containers: []
	W1129 09:01:52.069878  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:01:52.069886  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:01:52.069952  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:01:52.104321  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:52.104340  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:52.104344  460401 cri.go:89] found id: ""
	I1129 09:01:52.104352  460401 logs.go:282] 2 containers: [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:01:52.104402  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:52.109901  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:52.114778  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:01:52.114862  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:01:52.144981  460401 cri.go:89] found id: ""
	I1129 09:01:52.145005  460401 logs.go:282] 0 containers: []
	W1129 09:01:52.145013  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:01:52.145019  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:01:52.145069  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:01:52.174604  460401 cri.go:89] found id: ""
	I1129 09:01:52.174632  460401 logs.go:282] 0 containers: []
	W1129 09:01:52.174641  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:01:52.174651  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:01:52.174665  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:52.207427  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:01:52.207458  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:52.249558  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:01:52.249600  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:01:52.300742  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:01:52.300785  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:01:52.385321  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:01:52.385365  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:01:52.405491  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:01:52.405533  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:52.448465  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:01:52.448502  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:52.489466  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:01:52.489506  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:52.534107  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:01:52.534146  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:52.572361  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:01:52.572401  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:01:52.606656  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:01:52.606692  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
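The 460401 run above gathers component logs by first asking crictl for matching container IDs and then tailing each one. A minimal sketch of the same pattern for a single component (the component name is just an example; the flags are the ones shown in the log):

    NAME=kube-apiserver
    # List every container for the component, running or exited, IDs only.
    for id in $(sudo crictl ps -a --quiet --name="$NAME"); do
      # Tail the last 400 lines of each match, as in the '--tail 400' calls above.
      sudo crictl logs --tail 400 "$id"
    done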
	I1129 09:01:53.710005  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:54.209471  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:54.709414  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:55.209967  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:55.709378  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:56.210032  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:56.709982  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:57.209266  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:57.709968  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:58.209425  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:58.303052  493486 kubeadm.go:1114] duration metric: took 11.19438409s to wait for elevateKubeSystemPrivileges
	I1129 09:01:58.303107  493486 kubeadm.go:403] duration metric: took 21.598001105s to StartCluster
	I1129 09:01:58.303162  493486 settings.go:142] acquiring lock: {Name:mk6dbed29e5e99d89b1cbbd9e561d8f8791ae9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:58.303278  493486 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-255825/kubeconfig
	I1129 09:01:58.305561  493486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/kubeconfig: {Name:mk7d91966efd00ccef892cf02f31ec14469accbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:58.305924  493486 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1129 09:01:58.306112  493486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 09:01:58.306351  493486 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:01:58.306713  493486 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-295154"
	I1129 09:01:58.306776  493486 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-295154"
	I1129 09:01:58.306795  493486 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-295154"
	I1129 09:01:58.306776  493486 config.go:182] Loaded profile config "old-k8s-version-295154": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1129 09:01:58.306807  493486 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-295154"
	I1129 09:01:58.306834  493486 host.go:66] Checking if "old-k8s-version-295154" exists ...
	I1129 09:01:58.307864  493486 out.go:179] * Verifying Kubernetes components...
	I1129 09:01:58.307930  493486 cli_runner.go:164] Run: docker container inspect old-k8s-version-295154 --format={{.State.Status}}
	I1129 09:01:58.308039  493486 cli_runner.go:164] Run: docker container inspect old-k8s-version-295154 --format={{.State.Status}}
	I1129 09:01:58.309327  493486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:01:58.335085  493486 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-295154"
	I1129 09:01:58.335144  493486 host.go:66] Checking if "old-k8s-version-295154" exists ...
	I1129 09:01:58.335642  493486 cli_runner.go:164] Run: docker container inspect old-k8s-version-295154 --format={{.State.Status}}
	I1129 09:01:58.337139  493486 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:58.338693  493486 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:01:58.338716  493486 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:01:58.338899  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:58.368947  493486 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:01:58.368979  493486 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:01:58.369072  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:58.378680  493486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa Username:docker}
	I1129 09:01:58.399464  493486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa Username:docker}
	I1129 09:01:58.438617  493486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 09:01:58.498671  493486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:01:58.528524  493486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:01:58.536443  493486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:01:58.718007  493486 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1129 09:01:58.719713  493486 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-295154" to be "Ready" ...
	I1129 09:01:58.976512  493486 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
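The sed pipeline at 09:01:58.438617 rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway. A rough reconstruction of the resulting Corefile fragment, plus an illustrative check that is not part of the test run:

    # Corefile stanza produced by the sed expressions above (reconstructed, not copied from the cluster):
    #   hosts {
    #      192.168.76.1 host.minikube.internal
    #      fallthrough
    #   }
    #   forward . /etc/resolv.conf
    # Illustrative verification only:
    sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml | grep -n "host.minikube.internal"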
	I1129 09:02:01.574795  494126 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1129 09:02:01.574869  494126 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 09:02:01.575071  494126 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 09:02:01.575154  494126 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1129 09:02:01.575204  494126 kubeadm.go:319] OS: Linux
	I1129 09:02:01.575304  494126 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 09:02:01.575403  494126 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 09:02:01.575496  494126 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 09:02:01.575567  494126 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 09:02:01.575645  494126 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 09:02:01.575713  494126 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 09:02:01.575809  494126 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 09:02:01.575872  494126 kubeadm.go:319] CGROUPS_IO: enabled
	I1129 09:02:01.575964  494126 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 09:02:01.576092  494126 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 09:02:01.576217  494126 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1129 09:02:01.576325  494126 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 09:02:01.578171  494126 out.go:252]   - Generating certificates and keys ...
	I1129 09:02:01.578298  494126 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 09:02:01.578401  494126 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 09:02:01.578499  494126 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1129 09:02:01.578589  494126 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 09:02:01.578680  494126 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 09:02:01.578785  494126 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 09:02:01.578876  494126 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 09:02:01.579019  494126 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-924441] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1129 09:02:01.579122  494126 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 09:02:01.579311  494126 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-924441] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1129 09:02:01.579420  494126 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 09:02:01.579532  494126 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 09:02:01.579609  494126 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 09:02:01.579696  494126 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 09:02:01.579806  494126 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 09:02:01.579894  494126 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 09:02:01.579971  494126 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 09:02:01.580076  494126 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 09:02:01.580125  494126 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 09:02:01.580195  494126 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 09:02:01.580259  494126 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 09:02:01.582121  494126 out.go:252]   - Booting up control plane ...
	I1129 09:02:01.582267  494126 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 09:02:01.582364  494126 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 09:02:01.582460  494126 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 09:02:01.582603  494126 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 09:02:01.582773  494126 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1129 09:02:01.582902  494126 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1129 09:02:01.583026  494126 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 09:02:01.583068  494126 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 09:02:01.583182  494126 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1129 09:02:01.583325  494126 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1129 09:02:01.583413  494126 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001845652s
	I1129 09:02:01.583537  494126 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1129 09:02:01.583671  494126 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1129 09:02:01.583787  494126 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1129 09:02:01.583879  494126 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1129 09:02:01.583985  494126 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.852889014s
	I1129 09:02:01.584071  494126 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.023243656s
	I1129 09:02:01.584163  494126 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00195345s
	I1129 09:02:01.584314  494126 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 09:02:01.584493  494126 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 09:02:01.584584  494126 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 09:02:01.584867  494126 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-924441 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 09:02:01.584955  494126 kubeadm.go:319] [bootstrap-token] Using token: mvtuq7.pg2byk8o9fh5nfa2
	I1129 09:02:01.587787  494126 out.go:252]   - Configuring RBAC rules ...
	I1129 09:02:01.587916  494126 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 09:02:01.588028  494126 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 09:02:01.588232  494126 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 09:02:01.588384  494126 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 09:02:01.588517  494126 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 09:02:01.588635  494126 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 09:02:01.588779  494126 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 09:02:01.588837  494126 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 09:02:01.588907  494126 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 09:02:01.588916  494126 kubeadm.go:319] 
	I1129 09:02:01.589016  494126 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 09:02:01.589032  494126 kubeadm.go:319] 
	I1129 09:02:01.589151  494126 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 09:02:01.589160  494126 kubeadm.go:319] 
	I1129 09:02:01.589205  494126 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 09:02:01.589280  494126 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 09:02:01.589374  494126 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 09:02:01.589388  494126 kubeadm.go:319] 
	I1129 09:02:01.589465  494126 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 09:02:01.589473  494126 kubeadm.go:319] 
	I1129 09:02:01.589554  494126 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 09:02:01.589563  494126 kubeadm.go:319] 
	I1129 09:02:01.589607  494126 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 09:02:01.589671  494126 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 09:02:01.589782  494126 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 09:02:01.589795  494126 kubeadm.go:319] 
	I1129 09:02:01.589906  494126 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 09:02:01.590049  494126 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 09:02:01.590058  494126 kubeadm.go:319] 
	I1129 09:02:01.590132  494126 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token mvtuq7.pg2byk8o9fh5nfa2 \
	I1129 09:02:01.590268  494126 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:cfb13a4080e942b53ddf5e01885fcdd270ac918e177076400130991e2b6b7778 \
	I1129 09:02:01.590302  494126 kubeadm.go:319] 	--control-plane 
	I1129 09:02:01.590309  494126 kubeadm.go:319] 
	I1129 09:02:01.590425  494126 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 09:02:01.590434  494126 kubeadm.go:319] 
	I1129 09:02:01.590567  494126 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token mvtuq7.pg2byk8o9fh5nfa2 \
	I1129 09:02:01.590744  494126 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:cfb13a4080e942b53ddf5e01885fcdd270ac918e177076400130991e2b6b7778 
	I1129 09:02:01.590761  494126 cni.go:84] Creating CNI manager for ""
	I1129 09:02:01.590770  494126 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:02:01.592367  494126 out.go:179] * Configuring CNI (Container Networking Interface) ...
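The kubeadm output above ends with the usual post-init instructions and join commands; minikube performs the equivalent steps itself, but condensed for reference (token and hash copied verbatim from the output above):

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    # Worker join command from the same output:
    kubeadm join control-plane.minikube.internal:8443 --token mvtuq7.pg2byk8o9fh5nfa2 \
      --discovery-token-ca-cert-hash sha256:cfb13a4080e942b53ddf5e01885fcdd270ac918e177076400130991e2b6b7778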
	I1129 09:01:58.977447  493486 addons.go:530] duration metric: took 671.096745ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1129 09:01:59.226693  493486 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-295154" context rescaled to 1 replicas
	W1129 09:02:00.723077  493486 node_ready.go:57] node "old-k8s-version-295154" has "Ready":"False" status (will retry)
	W1129 09:02:02.723240  493486 node_ready.go:57] node "old-k8s-version-295154" has "Ready":"False" status (will retry)
	I1129 09:02:01.593492  494126 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 09:02:01.598544  494126 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1129 09:02:01.598567  494126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 09:02:01.615144  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1129 09:02:01.883935  494126 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 09:02:01.884024  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:01.884114  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-924441 minikube.k8s.io/updated_at=2025_11_29T09_02_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=no-preload-924441 minikube.k8s.io/primary=true
	I1129 09:02:01.969638  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:01.982178  494126 ops.go:34] apiserver oom_adj: -16
	I1129 09:02:02.470301  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:02.969878  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:03.470379  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:03.970554  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:04.469853  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
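The repeated "kubectl get sa default" runs above are the elevateKubeSystemPrivileges wait: the cluster is considered usable once the default service account exists. A minimal polling sketch; the 0.5s interval is inferred from the timestamps, not taken from the code:

    KUBECTL=/var/lib/minikube/binaries/v1.34.1/kubectl
    until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # logged attempts are roughly half a second apart
    done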
	I1129 09:02:02.669495  460401 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.062771993s)
	W1129 09:02:02.669547  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1129 09:02:02.669577  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:02.669596  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:02.710559  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:02.710605  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:04.970119  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:05.470767  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:05.969852  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:06.052010  494126 kubeadm.go:1114] duration metric: took 4.168052566s to wait for elevateKubeSystemPrivileges
	I1129 09:02:06.052057  494126 kubeadm.go:403] duration metric: took 14.974666914s to StartCluster
	I1129 09:02:06.052081  494126 settings.go:142] acquiring lock: {Name:mk6dbed29e5e99d89b1cbbd9e561d8f8791ae9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:02:06.052174  494126 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-255825/kubeconfig
	I1129 09:02:06.054258  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/kubeconfig: {Name:mk7d91966efd00ccef892cf02f31ec14469accbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:02:06.054571  494126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 09:02:06.054563  494126 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1129 09:02:06.054635  494126 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:02:06.054874  494126 config.go:182] Loaded profile config "no-preload-924441": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:02:06.054888  494126 addons.go:70] Setting storage-provisioner=true in profile "no-preload-924441"
	I1129 09:02:06.054933  494126 addons.go:70] Setting default-storageclass=true in profile "no-preload-924441"
	I1129 09:02:06.054947  494126 addons.go:239] Setting addon storage-provisioner=true in "no-preload-924441"
	I1129 09:02:06.054963  494126 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-924441"
	I1129 09:02:06.055012  494126 host.go:66] Checking if "no-preload-924441" exists ...
	I1129 09:02:06.055424  494126 cli_runner.go:164] Run: docker container inspect no-preload-924441 --format={{.State.Status}}
	I1129 09:02:06.055667  494126 cli_runner.go:164] Run: docker container inspect no-preload-924441 --format={{.State.Status}}
	I1129 09:02:06.056967  494126 out.go:179] * Verifying Kubernetes components...
	I1129 09:02:06.060417  494126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:02:06.083076  494126 addons.go:239] Setting addon default-storageclass=true in "no-preload-924441"
	I1129 09:02:06.083127  494126 host.go:66] Checking if "no-preload-924441" exists ...
	I1129 09:02:06.083615  494126 cli_runner.go:164] Run: docker container inspect no-preload-924441 --format={{.State.Status}}
	I1129 09:02:06.086028  494126 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:02:06.087100  494126 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:02:06.087121  494126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:02:06.087200  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:02:06.110337  494126 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:02:06.110366  494126 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:02:06.111183  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:02:06.116769  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa Username:docker}
	I1129 09:02:06.140007  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa Username:docker}
	I1129 09:02:06.151655  494126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 09:02:06.208406  494126 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:02:06.241470  494126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:02:06.273558  494126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:02:06.324896  494126 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1129 09:02:06.327889  494126 node_ready.go:35] waiting up to 6m0s for node "no-preload-924441" to be "Ready" ...
	I1129 09:02:06.574594  494126 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1129 09:02:05.223590  493486 node_ready.go:57] node "old-k8s-version-295154" has "Ready":"False" status (will retry)
	W1129 09:02:07.223929  493486 node_ready.go:57] node "old-k8s-version-295154" has "Ready":"False" status (will retry)
	I1129 09:02:06.575644  494126 addons.go:530] duration metric: took 521.007476ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1129 09:02:06.830448  494126 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-924441" context rescaled to 1 replicas
	W1129 09:02:08.331406  494126 node_ready.go:57] node "no-preload-924441" has "Ready":"False" status (will retry)
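node_ready.go above keeps re-reading the node until its Ready condition turns True. The test does this through the API client; an illustrative kubectl equivalent (not the test's own code) would be:

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get node no-preload-924441 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # Prints "True" once the node is Ready; at this point in the log it is still "False".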
	I1129 09:02:05.259668  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:02:07.201576  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:43246->192.168.85.2:8443: read: connection reset by peer
	I1129 09:02:07.201690  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:02:07.201778  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:02:07.234753  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:07.234781  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:02:07.234788  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:07.234793  460401 cri.go:89] found id: ""
	I1129 09:02:07.234804  460401 logs.go:282] 3 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:02:07.234869  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.240257  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.245641  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.251131  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:02:07.251196  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:02:07.280579  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:07.280608  460401 cri.go:89] found id: ""
	I1129 09:02:07.280621  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:02:07.280682  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.286123  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:02:07.286213  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:02:07.317491  460401 cri.go:89] found id: ""
	I1129 09:02:07.317519  460401 logs.go:282] 0 containers: []
	W1129 09:02:07.317528  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:02:07.317534  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:02:07.317586  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:02:07.347513  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:07.347534  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:07.347538  460401 cri.go:89] found id: ""
	I1129 09:02:07.347546  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:02:07.347610  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.353144  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.358223  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:02:07.358303  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:02:07.387488  460401 cri.go:89] found id: ""
	I1129 09:02:07.387516  460401 logs.go:282] 0 containers: []
	W1129 09:02:07.387525  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:02:07.387532  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:02:07.387595  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:02:07.418490  460401 cri.go:89] found id: "c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:07.418512  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:07.418516  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:07.418519  460401 cri.go:89] found id: ""
	I1129 09:02:07.418527  460401 logs.go:282] 3 containers: [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:02:07.418587  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.423956  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.429140  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.434196  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:02:07.434281  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:02:07.463114  460401 cri.go:89] found id: ""
	I1129 09:02:07.463138  460401 logs.go:282] 0 containers: []
	W1129 09:02:07.463148  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:02:07.463156  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:02:07.463222  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:02:07.494533  460401 cri.go:89] found id: ""
	I1129 09:02:07.494567  460401 logs.go:282] 0 containers: []
	W1129 09:02:07.494579  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:02:07.494592  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:02:07.494604  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:02:07.546238  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:02:07.546282  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:02:07.634664  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:02:07.634702  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:02:07.696753  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:02:07.696779  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:07.696796  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:07.733303  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:07.733343  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:07.786770  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:02:07.786809  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:07.824791  460401 logs.go:123] Gathering logs for kube-controller-manager [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6] ...
	I1129 09:02:07.824831  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:07.857029  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:02:07.857058  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:02:07.892009  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:02:07.892046  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:02:07.907552  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:02:07.907596  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	W1129 09:02:07.937558  460401 logs.go:130] failed kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095": Process exited with status 1
	stdout:
	
	stderr:
	E1129 09:02:07.934436    4413 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095\": not found" containerID="5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	time="2025-11-29T09:02:07Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095\": not found"
	 output: 
	** stderr ** 
	E1129 09:02:07.934436    4413 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095\": not found" containerID="5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	time="2025-11-29T09:02:07Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095\": not found"
	
	** /stderr **
	I1129 09:02:07.937577  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:02:07.937591  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:07.976501  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:02:07.976553  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:08.017968  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:02:08.018008  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:08.049057  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:02:08.049090  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
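The log-gathering passes above are bracketed by health probes against https://192.168.85.2:8443/healthz, which so far have failed with a client timeout and a connection reset. An illustrative way to probe the same endpoint by hand (certificate verification skipped with -k purely for the sketch; minikube itself uses the cluster CA):

    curl -sk --max-time 5 https://192.168.85.2:8443/healthz ; echo
    # A healthy apiserver answers "ok"; anything else matches the "stopped:" lines above.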
	W1129 09:02:09.723662  493486 node_ready.go:57] node "old-k8s-version-295154" has "Ready":"False" status (will retry)
	W1129 09:02:12.223024  493486 node_ready.go:57] node "old-k8s-version-295154" has "Ready":"False" status (will retry)
	I1129 09:02:13.224090  493486 node_ready.go:49] node "old-k8s-version-295154" is "Ready"
	I1129 09:02:13.224128  493486 node_ready.go:38] duration metric: took 14.504358398s for node "old-k8s-version-295154" to be "Ready" ...
	I1129 09:02:13.224148  493486 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:02:13.224211  493486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:02:13.243313  493486 api_server.go:72] duration metric: took 14.93733902s to wait for apiserver process to appear ...
	I1129 09:02:13.243343  493486 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:02:13.243370  493486 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:02:13.250694  493486 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1129 09:02:13.251984  493486 api_server.go:141] control plane version: v1.28.0
	I1129 09:02:13.252015  493486 api_server.go:131] duration metric: took 8.663278ms to wait for apiserver health ...
	I1129 09:02:13.252026  493486 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:02:13.255767  493486 system_pods.go:59] 8 kube-system pods found
	I1129 09:02:13.255813  493486 system_pods.go:61] "coredns-5dd5756b68-phw28" [7fc2b8dd-43dd-43df-8887-9ffa6de36fb4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:13.255822  493486 system_pods.go:61] "etcd-old-k8s-version-295154" [b49cf7c8-8d72-4db9-a96f-d796fd8d9e08] Running
	I1129 09:02:13.255829  493486 system_pods.go:61] "kindnet-k4n9l" [74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8] Running
	I1129 09:02:13.255835  493486 system_pods.go:61] "kube-apiserver-old-k8s-version-295154" [e4ca0771-197f-4d77-97f0-7a7778e227de] Running
	I1129 09:02:13.255841  493486 system_pods.go:61] "kube-controller-manager-old-k8s-version-295154" [6825ac68-da0d-474d-ac97-53398adffd73] Running
	I1129 09:02:13.255847  493486 system_pods.go:61] "kube-proxy-4rfb4" [05ef67c3-0d6e-453d-a0e5-81c649c3e033] Running
	I1129 09:02:13.255853  493486 system_pods.go:61] "kube-scheduler-old-k8s-version-295154" [97d5e6fb-5cb8-4a03-a8df-3f76df5b2671] Running
	I1129 09:02:13.255860  493486 system_pods.go:61] "storage-provisioner" [359871fd-a77c-430a-87c1-b313992718e2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:13.255869  493486 system_pods.go:74] duration metric: took 3.834915ms to wait for pod list to return data ...
	I1129 09:02:13.255879  493486 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:02:13.259936  493486 default_sa.go:45] found service account: "default"
	I1129 09:02:13.259965  493486 default_sa.go:55] duration metric: took 4.078247ms for default service account to be created ...
	I1129 09:02:13.259977  493486 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:02:13.264489  493486 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:13.264528  493486 system_pods.go:89] "coredns-5dd5756b68-phw28" [7fc2b8dd-43dd-43df-8887-9ffa6de36fb4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:13.264536  493486 system_pods.go:89] "etcd-old-k8s-version-295154" [b49cf7c8-8d72-4db9-a96f-d796fd8d9e08] Running
	I1129 09:02:13.264545  493486 system_pods.go:89] "kindnet-k4n9l" [74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8] Running
	I1129 09:02:13.264554  493486 system_pods.go:89] "kube-apiserver-old-k8s-version-295154" [e4ca0771-197f-4d77-97f0-7a7778e227de] Running
	I1129 09:02:13.264562  493486 system_pods.go:89] "kube-controller-manager-old-k8s-version-295154" [6825ac68-da0d-474d-ac97-53398adffd73] Running
	I1129 09:02:13.264567  493486 system_pods.go:89] "kube-proxy-4rfb4" [05ef67c3-0d6e-453d-a0e5-81c649c3e033] Running
	I1129 09:02:13.264572  493486 system_pods.go:89] "kube-scheduler-old-k8s-version-295154" [97d5e6fb-5cb8-4a03-a8df-3f76df5b2671] Running
	I1129 09:02:13.264586  493486 system_pods.go:89] "storage-provisioner" [359871fd-a77c-430a-87c1-b313992718e2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:13.264615  493486 retry.go:31] will retry after 309.906184ms: missing components: kube-dns
	W1129 09:02:10.832100  494126 node_ready.go:57] node "no-preload-924441" has "Ready":"False" status (will retry)
	W1129 09:02:13.330706  494126 node_ready.go:57] node "no-preload-924441" has "Ready":"False" status (will retry)
	I1129 09:02:10.584596  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:02:10.585082  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:02:10.585139  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:02:10.585192  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:02:10.615813  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:10.615833  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:10.615837  460401 cri.go:89] found id: ""
	I1129 09:02:10.615846  460401 logs.go:282] 2 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:02:10.615910  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.621079  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.625927  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:02:10.626017  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:02:10.655780  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:10.655808  460401 cri.go:89] found id: ""
	I1129 09:02:10.655817  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:02:10.655877  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.661197  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:02:10.661278  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:02:10.692401  460401 cri.go:89] found id: ""
	I1129 09:02:10.692423  460401 logs.go:282] 0 containers: []
	W1129 09:02:10.692431  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:02:10.692436  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:02:10.692496  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:02:10.721278  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:10.721303  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:10.721309  460401 cri.go:89] found id: ""
	I1129 09:02:10.721320  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:02:10.721387  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.726913  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.731556  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:02:10.731637  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:02:10.759345  460401 cri.go:89] found id: ""
	I1129 09:02:10.759373  460401 logs.go:282] 0 containers: []
	W1129 09:02:10.759381  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:02:10.759386  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:02:10.759446  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:02:10.790190  460401 cri.go:89] found id: "c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:10.790215  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:10.790221  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:10.790226  460401 cri.go:89] found id: ""
	I1129 09:02:10.790236  460401 logs.go:282] 3 containers: [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:02:10.790305  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.795588  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.800622  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.805263  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:02:10.805338  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:02:10.834942  460401 cri.go:89] found id: ""
	I1129 09:02:10.834973  460401 logs.go:282] 0 containers: []
	W1129 09:02:10.834991  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:02:10.834999  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:02:10.835065  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:02:10.872503  460401 cri.go:89] found id: ""
	I1129 09:02:10.872536  460401 logs.go:282] 0 containers: []
	W1129 09:02:10.872547  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:02:10.872562  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:10.872586  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:10.926644  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:02:10.926681  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:10.965025  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:02:10.965069  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:10.998068  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:02:10.998102  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:02:11.043686  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:02:11.043743  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:02:11.134380  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:02:11.134422  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:02:11.150475  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:02:11.150510  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:02:11.210329  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:02:11.210348  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:02:11.210364  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:11.250422  460401 logs.go:123] Gathering logs for kube-controller-manager [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6] ...
	I1129 09:02:11.250457  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:11.280219  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:02:11.280255  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:11.315565  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:02:11.315596  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:02:11.349327  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:11.349358  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:11.384696  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:02:11.384729  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:13.923850  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:02:13.924341  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:02:13.924398  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:02:13.924461  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:02:13.954410  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:13.954430  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:13.954434  460401 cri.go:89] found id: ""
	I1129 09:02:13.954442  460401 logs.go:282] 2 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:02:13.954501  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:13.959624  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:13.964312  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:02:13.964377  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:02:13.992596  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:13.992625  460401 cri.go:89] found id: ""
	I1129 09:02:13.992636  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:02:13.992703  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:13.998893  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:02:13.998972  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:02:14.028106  460401 cri.go:89] found id: ""
	I1129 09:02:14.028140  460401 logs.go:282] 0 containers: []
	W1129 09:02:14.028152  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:02:14.028161  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:02:14.028230  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:02:14.057393  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:14.057414  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:14.057418  460401 cri.go:89] found id: ""
	I1129 09:02:14.057427  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:02:14.057482  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:14.062623  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:14.067579  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:02:14.067654  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:02:14.102801  460401 cri.go:89] found id: ""
	I1129 09:02:14.102840  460401 logs.go:282] 0 containers: []
	W1129 09:02:14.102853  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:02:14.102860  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:02:14.102925  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:02:14.135951  460401 cri.go:89] found id: "c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:14.135979  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:14.135985  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:14.135988  460401 cri.go:89] found id: ""
	I1129 09:02:14.135998  460401 logs.go:282] 3 containers: [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:02:14.136064  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:14.141983  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:14.147316  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:14.152463  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:02:14.152555  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:02:14.181365  460401 cri.go:89] found id: ""
	I1129 09:02:14.181398  460401 logs.go:282] 0 containers: []
	W1129 09:02:14.181409  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:02:14.181417  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:02:14.181477  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:02:14.210267  460401 cri.go:89] found id: ""
	I1129 09:02:14.210292  460401 logs.go:282] 0 containers: []
	W1129 09:02:14.210300  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:02:14.210310  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:02:14.210323  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:02:14.298625  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:02:14.298662  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:02:14.315504  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:02:14.315529  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:14.357098  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:14.357134  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:14.407082  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:02:14.407133  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:14.441442  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:02:14.441482  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:14.476419  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:02:14.476452  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:02:13.579150  493486 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:13.579183  493486 system_pods.go:89] "coredns-5dd5756b68-phw28" [7fc2b8dd-43dd-43df-8887-9ffa6de36fb4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:13.579189  493486 system_pods.go:89] "etcd-old-k8s-version-295154" [b49cf7c8-8d72-4db9-a96f-d796fd8d9e08] Running
	I1129 09:02:13.579195  493486 system_pods.go:89] "kindnet-k4n9l" [74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8] Running
	I1129 09:02:13.579199  493486 system_pods.go:89] "kube-apiserver-old-k8s-version-295154" [e4ca0771-197f-4d77-97f0-7a7778e227de] Running
	I1129 09:02:13.579203  493486 system_pods.go:89] "kube-controller-manager-old-k8s-version-295154" [6825ac68-da0d-474d-ac97-53398adffd73] Running
	I1129 09:02:13.579206  493486 system_pods.go:89] "kube-proxy-4rfb4" [05ef67c3-0d6e-453d-a0e5-81c649c3e033] Running
	I1129 09:02:13.579210  493486 system_pods.go:89] "kube-scheduler-old-k8s-version-295154" [97d5e6fb-5cb8-4a03-a8df-3f76df5b2671] Running
	I1129 09:02:13.579220  493486 system_pods.go:89] "storage-provisioner" [359871fd-a77c-430a-87c1-b313992718e2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:13.579237  493486 retry.go:31] will retry after 360.039109ms: missing components: kube-dns
	I1129 09:02:13.944039  493486 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:13.944084  493486 system_pods.go:89] "coredns-5dd5756b68-phw28" [7fc2b8dd-43dd-43df-8887-9ffa6de36fb4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:13.944094  493486 system_pods.go:89] "etcd-old-k8s-version-295154" [b49cf7c8-8d72-4db9-a96f-d796fd8d9e08] Running
	I1129 09:02:13.944104  493486 system_pods.go:89] "kindnet-k4n9l" [74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8] Running
	I1129 09:02:13.944110  493486 system_pods.go:89] "kube-apiserver-old-k8s-version-295154" [e4ca0771-197f-4d77-97f0-7a7778e227de] Running
	I1129 09:02:13.944116  493486 system_pods.go:89] "kube-controller-manager-old-k8s-version-295154" [6825ac68-da0d-474d-ac97-53398adffd73] Running
	I1129 09:02:13.944121  493486 system_pods.go:89] "kube-proxy-4rfb4" [05ef67c3-0d6e-453d-a0e5-81c649c3e033] Running
	I1129 09:02:13.944127  493486 system_pods.go:89] "kube-scheduler-old-k8s-version-295154" [97d5e6fb-5cb8-4a03-a8df-3f76df5b2671] Running
	I1129 09:02:13.944133  493486 system_pods.go:89] "storage-provisioner" [359871fd-a77c-430a-87c1-b313992718e2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:13.944166  493486 retry.go:31] will retry after 339.658127ms: missing components: kube-dns
	I1129 09:02:14.288499  493486 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:14.288533  493486 system_pods.go:89] "coredns-5dd5756b68-phw28" [7fc2b8dd-43dd-43df-8887-9ffa6de36fb4] Running
	I1129 09:02:14.288543  493486 system_pods.go:89] "etcd-old-k8s-version-295154" [b49cf7c8-8d72-4db9-a96f-d796fd8d9e08] Running
	I1129 09:02:14.288548  493486 system_pods.go:89] "kindnet-k4n9l" [74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8] Running
	I1129 09:02:14.288553  493486 system_pods.go:89] "kube-apiserver-old-k8s-version-295154" [e4ca0771-197f-4d77-97f0-7a7778e227de] Running
	I1129 09:02:14.288563  493486 system_pods.go:89] "kube-controller-manager-old-k8s-version-295154" [6825ac68-da0d-474d-ac97-53398adffd73] Running
	I1129 09:02:14.288568  493486 system_pods.go:89] "kube-proxy-4rfb4" [05ef67c3-0d6e-453d-a0e5-81c649c3e033] Running
	I1129 09:02:14.288573  493486 system_pods.go:89] "kube-scheduler-old-k8s-version-295154" [97d5e6fb-5cb8-4a03-a8df-3f76df5b2671] Running
	I1129 09:02:14.288578  493486 system_pods.go:89] "storage-provisioner" [359871fd-a77c-430a-87c1-b313992718e2] Running
	I1129 09:02:14.288588  493486 system_pods.go:126] duration metric: took 1.028603527s to wait for k8s-apps to be running ...
	I1129 09:02:14.288601  493486 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:02:14.288662  493486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:02:14.302535  493486 system_svc.go:56] duration metric: took 13.922382ms WaitForService to wait for kubelet
	I1129 09:02:14.302570  493486 kubeadm.go:587] duration metric: took 15.996603485s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:02:14.302594  493486 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:02:14.305508  493486 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1129 09:02:14.305535  493486 node_conditions.go:123] node cpu capacity is 8
	I1129 09:02:14.305552  493486 node_conditions.go:105] duration metric: took 2.953214ms to run NodePressure ...
	I1129 09:02:14.305564  493486 start.go:242] waiting for startup goroutines ...
	I1129 09:02:14.305570  493486 start.go:247] waiting for cluster config update ...
	I1129 09:02:14.305583  493486 start.go:256] writing updated cluster config ...
	I1129 09:02:14.305887  493486 ssh_runner.go:195] Run: rm -f paused
	I1129 09:02:14.309803  493486 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:02:14.314558  493486 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-phw28" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.319446  493486 pod_ready.go:94] pod "coredns-5dd5756b68-phw28" is "Ready"
	I1129 09:02:14.319479  493486 pod_ready.go:86] duration metric: took 4.889509ms for pod "coredns-5dd5756b68-phw28" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.322499  493486 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.326608  493486 pod_ready.go:94] pod "etcd-old-k8s-version-295154" is "Ready"
	I1129 09:02:14.326631  493486 pod_ready.go:86] duration metric: took 4.109693ms for pod "etcd-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.329352  493486 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.333844  493486 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-295154" is "Ready"
	I1129 09:02:14.333867  493486 pod_ready.go:86] duration metric: took 4.49688ms for pod "kube-apiserver-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.336686  493486 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.714439  493486 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-295154" is "Ready"
	I1129 09:02:14.714472  493486 pod_ready.go:86] duration metric: took 377.765984ms for pod "kube-controller-manager-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.915822  493486 pod_ready.go:83] waiting for pod "kube-proxy-4rfb4" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:15.314552  493486 pod_ready.go:94] pod "kube-proxy-4rfb4" is "Ready"
	I1129 09:02:15.314586  493486 pod_ready.go:86] duration metric: took 398.736001ms for pod "kube-proxy-4rfb4" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:15.515989  493486 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:15.913869  493486 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-295154" is "Ready"
	I1129 09:02:15.913896  493486 pod_ready.go:86] duration metric: took 397.877691ms for pod "kube-scheduler-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:15.913908  493486 pod_ready.go:40] duration metric: took 1.604073956s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:02:15.959941  493486 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1129 09:02:15.961883  493486 out.go:203] 
	W1129 09:02:15.963183  493486 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1129 09:02:15.964449  493486 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1129 09:02:15.966035  493486 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-295154" cluster and "default" namespace by default
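
The block above records the final readiness phase of the old-k8s-version start: wait for the node to report Ready, probe the apiserver's /healthz endpoint, then poll the kube-system pods until kube-dns is running. As a rough way to reproduce the healthz probe outside the harness, here is a minimal Go sketch; the endpoint, timeout, retry interval, and the choice to skip TLS verification are illustrative assumptions, not values or behavior taken from the harness code.

	// healthzwait.go - a minimal sketch (not minikube code) of the apiserver
	// healthz polling recorded in the log above. URL and timings are assumptions.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver serves a self-signed certificate in this setup, so the
			// sketch skips verification; a real client would load the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // retry interval is an assumption
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		// Endpoint is an example; substitute the cluster's advertise address.
		if err := waitForHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
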
	W1129 09:02:15.330798  494126 node_ready.go:57] node "no-preload-924441" has "Ready":"False" status (will retry)
	W1129 09:02:17.331851  494126 node_ready.go:57] node "no-preload-924441" has "Ready":"False" status (will retry)
	I1129 09:02:14.509454  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:02:14.509484  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:02:14.571273  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:02:14.571298  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:14.571312  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:14.605440  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:02:14.605476  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:14.642678  460401 logs.go:123] Gathering logs for kube-controller-manager [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6] ...
	I1129 09:02:14.642712  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:14.671483  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:02:14.671514  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:14.701619  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:02:14.701647  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:02:17.246912  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:02:17.247337  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:02:17.247422  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:02:17.247479  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:02:17.277610  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:17.277632  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:17.277637  460401 cri.go:89] found id: ""
	I1129 09:02:17.277647  460401 logs.go:282] 2 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:02:17.277711  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.283531  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.288554  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:02:17.288644  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:02:17.316819  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:17.316847  460401 cri.go:89] found id: ""
	I1129 09:02:17.316857  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:02:17.316921  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.322640  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:02:17.322770  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:02:17.353531  460401 cri.go:89] found id: ""
	I1129 09:02:17.353563  460401 logs.go:282] 0 containers: []
	W1129 09:02:17.353575  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:02:17.353585  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:02:17.353651  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:02:17.384830  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:17.384854  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:17.384858  460401 cri.go:89] found id: ""
	I1129 09:02:17.384867  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:02:17.384932  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.390132  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.395096  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:02:17.395177  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:02:17.425643  460401 cri.go:89] found id: ""
	I1129 09:02:17.425681  460401 logs.go:282] 0 containers: []
	W1129 09:02:17.425692  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:02:17.425704  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:02:17.425788  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:02:17.456077  460401 cri.go:89] found id: "c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:17.456105  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:17.456113  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:17.456136  460401 cri.go:89] found id: ""
	I1129 09:02:17.456148  460401 logs.go:282] 3 containers: [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:02:17.456213  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.461610  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.466727  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.471762  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:02:17.471849  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:02:17.501750  460401 cri.go:89] found id: ""
	I1129 09:02:17.501782  460401 logs.go:282] 0 containers: []
	W1129 09:02:17.501793  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:02:17.501801  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:02:17.501868  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:02:17.531903  460401 cri.go:89] found id: ""
	I1129 09:02:17.531932  460401 logs.go:282] 0 containers: []
	W1129 09:02:17.531942  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:02:17.531956  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:02:17.531972  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:02:17.630517  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:17.630566  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:17.667169  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:02:17.667205  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:17.707311  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:02:17.707360  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:17.746580  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:17.746621  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:17.799162  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:02:17.799207  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:17.839313  460401 logs.go:123] Gathering logs for kube-controller-manager [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6] ...
	I1129 09:02:17.839355  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:17.872700  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:02:17.872742  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:17.904806  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:02:17.904838  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:02:17.920866  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:02:17.920904  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:02:17.983002  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:02:17.983027  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:02:17.983040  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:18.019203  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:02:18.019241  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:02:18.070893  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:02:18.070936  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
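
While the apiserver at 192.168.85.2:8443 keeps refusing connections, process 460401 repeats the same diagnostic cycle: enumerate containers per component with `crictl ps -a --quiet --name=<component>`, then tail the last 400 lines of each match. A standalone sketch of that cycle is below, assuming crictl is on PATH and sudo is available non-interactively; it is not the harness's own cri.go/logs.go code.

	// crilogs.go - a minimal sketch of the container-enumeration and log-tailing
	// cycle shown above. Component names and the 400-line tail mirror the log;
	// the local (non-SSH) invocation is an assumption for illustration.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists the IDs of all containers (running or exited) whose
	// name matches the filter, via `crictl ps -a --quiet --name=<name>`.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps failed for %q: %w", name, err)
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, component := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
			ids, err := containerIDs(component)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no containers found matching %q\n", component)
				continue
			}
			for _, id := range ids {
				// Tail the last 400 lines of each matching container, as in the log above.
				logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
				if err != nil {
					fmt.Printf("failed to read logs for %s: %v\n", id, err)
					continue
				}
				fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logs)
			}
		}
	}
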
	W1129 09:02:19.830479  494126 node_ready.go:57] node "no-preload-924441" has "Ready":"False" status (will retry)
	I1129 09:02:20.833313  494126 node_ready.go:49] node "no-preload-924441" is "Ready"
	I1129 09:02:20.833355  494126 node_ready.go:38] duration metric: took 14.505431475s for node "no-preload-924441" to be "Ready" ...
	I1129 09:02:20.833377  494126 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:02:20.833445  494126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:02:20.850134  494126 api_server.go:72] duration metric: took 14.795523765s to wait for apiserver process to appear ...
	I1129 09:02:20.850165  494126 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:02:20.850190  494126 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1129 09:02:20.856514  494126 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1129 09:02:20.857900  494126 api_server.go:141] control plane version: v1.34.1
	I1129 09:02:20.857933  494126 api_server.go:131] duration metric: took 7.759312ms to wait for apiserver health ...
	I1129 09:02:20.857945  494126 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:02:20.861811  494126 system_pods.go:59] 8 kube-system pods found
	I1129 09:02:20.861851  494126 system_pods.go:61] "coredns-66bc5c9577-nsh8w" [bf2a8ab9-aaca-4ee6-a390-a02099f693d9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:20.861863  494126 system_pods.go:61] "etcd-no-preload-924441" [e3cda1b0-1ca8-4ded-a506-f728fc050781] Running
	I1129 09:02:20.861871  494126 system_pods.go:61] "kindnet-nscfk" [052c2152-0369-4121-b2fe-25b79a00145a] Running
	I1129 09:02:20.861877  494126 system_pods.go:61] "kube-apiserver-no-preload-924441" [08168b39-5d95-4d6b-ac99-3c6ee50a2530] Running
	I1129 09:02:20.861892  494126 system_pods.go:61] "kube-controller-manager-no-preload-924441" [9e84b562-ff11-40c1-a7ab-3682dbbae4be] Running
	I1129 09:02:20.861897  494126 system_pods.go:61] "kube-proxy-96fcg" [c9fd8592-2ec4-4da3-a800-b136c118d379] Running
	I1129 09:02:20.861902  494126 system_pods.go:61] "kube-scheduler-no-preload-924441" [91fa5a87-81d7-4b1c-8334-9c5c4fcf8997] Running
	I1129 09:02:20.861912  494126 system_pods.go:61] "storage-provisioner" [88b64cf8-3233-47bb-be31-6f367a8a1433] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:20.861920  494126 system_pods.go:74] duration metric: took 3.967151ms to wait for pod list to return data ...
	I1129 09:02:20.861931  494126 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:02:20.864542  494126 default_sa.go:45] found service account: "default"
	I1129 09:02:20.864569  494126 default_sa.go:55] duration metric: took 2.631761ms for default service account to be created ...
	I1129 09:02:20.864581  494126 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:02:20.867876  494126 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:20.867913  494126 system_pods.go:89] "coredns-66bc5c9577-nsh8w" [bf2a8ab9-aaca-4ee6-a390-a02099f693d9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:20.867924  494126 system_pods.go:89] "etcd-no-preload-924441" [e3cda1b0-1ca8-4ded-a506-f728fc050781] Running
	I1129 09:02:20.867932  494126 system_pods.go:89] "kindnet-nscfk" [052c2152-0369-4121-b2fe-25b79a00145a] Running
	I1129 09:02:20.867938  494126 system_pods.go:89] "kube-apiserver-no-preload-924441" [08168b39-5d95-4d6b-ac99-3c6ee50a2530] Running
	I1129 09:02:20.867999  494126 system_pods.go:89] "kube-controller-manager-no-preload-924441" [9e84b562-ff11-40c1-a7ab-3682dbbae4be] Running
	I1129 09:02:20.868005  494126 system_pods.go:89] "kube-proxy-96fcg" [c9fd8592-2ec4-4da3-a800-b136c118d379] Running
	I1129 09:02:20.868011  494126 system_pods.go:89] "kube-scheduler-no-preload-924441" [91fa5a87-81d7-4b1c-8334-9c5c4fcf8997] Running
	I1129 09:02:20.868027  494126 system_pods.go:89] "storage-provisioner" [88b64cf8-3233-47bb-be31-6f367a8a1433] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:20.868077  494126 retry.go:31] will retry after 292.54579ms: missing components: kube-dns
	I1129 09:02:21.165357  494126 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:21.165399  494126 system_pods.go:89] "coredns-66bc5c9577-nsh8w" [bf2a8ab9-aaca-4ee6-a390-a02099f693d9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:21.165408  494126 system_pods.go:89] "etcd-no-preload-924441" [e3cda1b0-1ca8-4ded-a506-f728fc050781] Running
	I1129 09:02:21.165416  494126 system_pods.go:89] "kindnet-nscfk" [052c2152-0369-4121-b2fe-25b79a00145a] Running
	I1129 09:02:21.165422  494126 system_pods.go:89] "kube-apiserver-no-preload-924441" [08168b39-5d95-4d6b-ac99-3c6ee50a2530] Running
	I1129 09:02:21.165428  494126 system_pods.go:89] "kube-controller-manager-no-preload-924441" [9e84b562-ff11-40c1-a7ab-3682dbbae4be] Running
	I1129 09:02:21.165434  494126 system_pods.go:89] "kube-proxy-96fcg" [c9fd8592-2ec4-4da3-a800-b136c118d379] Running
	I1129 09:02:21.165439  494126 system_pods.go:89] "kube-scheduler-no-preload-924441" [91fa5a87-81d7-4b1c-8334-9c5c4fcf8997] Running
	I1129 09:02:21.165449  494126 system_pods.go:89] "storage-provisioner" [88b64cf8-3233-47bb-be31-6f367a8a1433] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:21.165470  494126 retry.go:31] will retry after 336.406198ms: missing components: kube-dns
	I1129 09:02:21.505471  494126 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:21.505510  494126 system_pods.go:89] "coredns-66bc5c9577-nsh8w" [bf2a8ab9-aaca-4ee6-a390-a02099f693d9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:21.505516  494126 system_pods.go:89] "etcd-no-preload-924441" [e3cda1b0-1ca8-4ded-a506-f728fc050781] Running
	I1129 09:02:21.505524  494126 system_pods.go:89] "kindnet-nscfk" [052c2152-0369-4121-b2fe-25b79a00145a] Running
	I1129 09:02:21.505528  494126 system_pods.go:89] "kube-apiserver-no-preload-924441" [08168b39-5d95-4d6b-ac99-3c6ee50a2530] Running
	I1129 09:02:21.505531  494126 system_pods.go:89] "kube-controller-manager-no-preload-924441" [9e84b562-ff11-40c1-a7ab-3682dbbae4be] Running
	I1129 09:02:21.505534  494126 system_pods.go:89] "kube-proxy-96fcg" [c9fd8592-2ec4-4da3-a800-b136c118d379] Running
	I1129 09:02:21.505538  494126 system_pods.go:89] "kube-scheduler-no-preload-924441" [91fa5a87-81d7-4b1c-8334-9c5c4fcf8997] Running
	I1129 09:02:21.505542  494126 system_pods.go:89] "storage-provisioner" [88b64cf8-3233-47bb-be31-6f367a8a1433] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:21.505560  494126 retry.go:31] will retry after 447.535618ms: missing components: kube-dns
	I1129 09:02:21.957409  494126 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:21.957439  494126 system_pods.go:89] "coredns-66bc5c9577-nsh8w" [bf2a8ab9-aaca-4ee6-a390-a02099f693d9] Running
	I1129 09:02:21.957444  494126 system_pods.go:89] "etcd-no-preload-924441" [e3cda1b0-1ca8-4ded-a506-f728fc050781] Running
	I1129 09:02:21.957448  494126 system_pods.go:89] "kindnet-nscfk" [052c2152-0369-4121-b2fe-25b79a00145a] Running
	I1129 09:02:21.957451  494126 system_pods.go:89] "kube-apiserver-no-preload-924441" [08168b39-5d95-4d6b-ac99-3c6ee50a2530] Running
	I1129 09:02:21.957456  494126 system_pods.go:89] "kube-controller-manager-no-preload-924441" [9e84b562-ff11-40c1-a7ab-3682dbbae4be] Running
	I1129 09:02:21.957459  494126 system_pods.go:89] "kube-proxy-96fcg" [c9fd8592-2ec4-4da3-a800-b136c118d379] Running
	I1129 09:02:21.957464  494126 system_pods.go:89] "kube-scheduler-no-preload-924441" [91fa5a87-81d7-4b1c-8334-9c5c4fcf8997] Running
	I1129 09:02:21.957467  494126 system_pods.go:89] "storage-provisioner" [88b64cf8-3233-47bb-be31-6f367a8a1433] Running
	I1129 09:02:21.957476  494126 system_pods.go:126] duration metric: took 1.092887723s to wait for k8s-apps to be running ...
	I1129 09:02:21.957498  494126 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:02:21.957549  494126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:02:21.971582  494126 system_svc.go:56] duration metric: took 14.071974ms WaitForService to wait for kubelet
	I1129 09:02:21.971613  494126 kubeadm.go:587] duration metric: took 15.917009838s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:02:21.971632  494126 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:02:21.974426  494126 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1129 09:02:21.974453  494126 node_conditions.go:123] node cpu capacity is 8
	I1129 09:02:21.974471  494126 node_conditions.go:105] duration metric: took 2.83418ms to run NodePressure ...
	I1129 09:02:21.974485  494126 start.go:242] waiting for startup goroutines ...
	I1129 09:02:21.974492  494126 start.go:247] waiting for cluster config update ...
	I1129 09:02:21.974502  494126 start.go:256] writing updated cluster config ...
	I1129 09:02:21.974780  494126 ssh_runner.go:195] Run: rm -f paused
	I1129 09:02:21.978967  494126 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:02:21.982434  494126 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nsh8w" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:21.986370  494126 pod_ready.go:94] pod "coredns-66bc5c9577-nsh8w" is "Ready"
	I1129 09:02:21.986395  494126 pod_ready.go:86] duration metric: took 3.939701ms for pod "coredns-66bc5c9577-nsh8w" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:21.988365  494126 pod_ready.go:83] waiting for pod "etcd-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:21.991850  494126 pod_ready.go:94] pod "etcd-no-preload-924441" is "Ready"
	I1129 09:02:21.991874  494126 pod_ready.go:86] duration metric: took 3.486388ms for pod "etcd-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:21.993587  494126 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:21.997072  494126 pod_ready.go:94] pod "kube-apiserver-no-preload-924441" is "Ready"
	I1129 09:02:21.997092  494126 pod_ready.go:86] duration metric: took 3.484304ms for pod "kube-apiserver-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:21.998698  494126 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:22.382918  494126 pod_ready.go:94] pod "kube-controller-manager-no-preload-924441" is "Ready"
	I1129 09:02:22.382948  494126 pod_ready.go:86] duration metric: took 384.232783ms for pod "kube-controller-manager-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:22.583125  494126 pod_ready.go:83] waiting for pod "kube-proxy-96fcg" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:22.982608  494126 pod_ready.go:94] pod "kube-proxy-96fcg" is "Ready"
	I1129 09:02:22.982639  494126 pod_ready.go:86] duration metric: took 399.48383ms for pod "kube-proxy-96fcg" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:23.184031  494126 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:23.583027  494126 pod_ready.go:94] pod "kube-scheduler-no-preload-924441" is "Ready"
	I1129 09:02:23.583058  494126 pod_ready.go:86] duration metric: took 399.00134ms for pod "kube-scheduler-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:23.583071  494126 pod_ready.go:40] duration metric: took 1.604064431s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:02:23.632822  494126 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:02:23.634677  494126 out.go:179] * Done! kubectl is now configured to use "no-preload-924441" cluster and "default" namespace by default
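
Both successful starts (old-k8s-version and no-preload) end with an extra wait for every control-plane pod matching one of the listed labels to report a Ready condition. A hedged approximation of that check using kubectl follows; the namespace, label selectors, 4-minute budget, and 2-second poll interval are assumptions that mirror the log output rather than constants taken from the harness.

	// podready.go - a minimal sketch (not the harness's pod_ready.go) of the
	// per-label "Ready" wait: poll each selector in kube-system until every
	// matching pod's Ready condition is "True" or the deadline passes.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podsReady reports whether every pod matching the selector in kube-system
	// currently has a Ready condition with status "True".
	func podsReady(selector string) (bool, error) {
		out, err := exec.Command("kubectl", "get", "pods", "-n", "kube-system",
			"-l", selector,
			"-o", `jsonpath={range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}`).Output()
		if err != nil {
			return false, err
		}
		statuses := strings.Fields(string(out))
		if len(statuses) == 0 {
			return false, nil // no pods scheduled yet
		}
		for _, s := range statuses {
			if s != "True" {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
			"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}
		deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s extra wait in the log
		for _, sel := range selectors {
			for {
				ok, err := podsReady(sel)
				if ok {
					fmt.Printf("pods matching %q are Ready\n", sel)
					break
				}
				if time.Now().After(deadline) {
					fmt.Printf("timed out waiting for %q (last error: %v)\n", sel, err)
					break
				}
				time.Sleep(2 * time.Second)
			}
		}
	}
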
	I1129 09:02:20.607959  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:02:20.608406  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:02:20.608469  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:02:20.608531  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:02:20.639116  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:20.639148  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:20.639155  460401 cri.go:89] found id: ""
	I1129 09:02:20.639168  460401 logs.go:282] 2 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:02:20.639240  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:20.644749  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:20.649347  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:02:20.649411  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:02:20.677383  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:20.677404  460401 cri.go:89] found id: ""
	I1129 09:02:20.677413  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:02:20.677466  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:20.682625  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:02:20.682708  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:02:20.711021  460401 cri.go:89] found id: ""
	I1129 09:02:20.711050  460401 logs.go:282] 0 containers: []
	W1129 09:02:20.711060  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:02:20.711070  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:02:20.711138  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:02:20.745598  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:20.745626  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:20.745632  460401 cri.go:89] found id: ""
	I1129 09:02:20.745643  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:02:20.745716  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:20.751838  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:20.757804  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:02:20.757881  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:02:20.793640  460401 cri.go:89] found id: ""
	I1129 09:02:20.793671  460401 logs.go:282] 0 containers: []
	W1129 09:02:20.793683  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:02:20.793691  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:02:20.793792  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:02:20.830071  460401 cri.go:89] found id: "c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:20.830099  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:20.830104  460401 cri.go:89] found id: ""
	I1129 09:02:20.830114  460401 logs.go:282] 2 containers: [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:02:20.830179  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:20.837576  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:20.843146  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:02:20.843225  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:02:20.883480  460401 cri.go:89] found id: ""
	I1129 09:02:20.883525  460401 logs.go:282] 0 containers: []
	W1129 09:02:20.883536  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:02:20.883543  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:02:20.883598  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:02:20.923499  460401 cri.go:89] found id: ""
	I1129 09:02:20.923532  460401 logs.go:282] 0 containers: []
	W1129 09:02:20.923543  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:02:20.923557  460401 logs.go:123] Gathering logs for kube-controller-manager [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6] ...
	I1129 09:02:20.923574  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:20.961675  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:02:20.961713  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:20.996489  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:02:20.996524  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:02:21.046535  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:02:21.046596  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:02:21.131239  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:02:21.131286  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:02:21.192537  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:02:21.192557  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:21.192573  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:21.227894  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:02:21.227932  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:21.262592  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:02:21.262632  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:02:21.298034  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:02:21.298076  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:02:21.313593  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:02:21.313626  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:21.355840  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:21.355878  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:21.409528  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:02:21.409570  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:23.946261  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:02:23.946794  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:02:23.946872  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:02:23.946940  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:02:23.978496  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:23.978521  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:23.978525  460401 cri.go:89] found id: ""
	I1129 09:02:23.978533  460401 logs.go:282] 2 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:02:23.978585  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:23.983820  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:23.988502  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:02:23.988563  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:02:24.017479  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:24.017505  460401 cri.go:89] found id: ""
	I1129 09:02:24.017516  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:02:24.017581  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:24.022978  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:02:24.023049  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:02:24.054017  460401 cri.go:89] found id: ""
	I1129 09:02:24.054042  460401 logs.go:282] 0 containers: []
	W1129 09:02:24.054049  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:02:24.054055  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:02:24.054104  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:02:24.083682  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:24.083704  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:24.083710  460401 cri.go:89] found id: ""
	I1129 09:02:24.083720  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:02:24.083797  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:24.089191  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:24.094144  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:02:24.094223  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:02:24.123931  460401 cri.go:89] found id: ""
	I1129 09:02:24.123956  460401 logs.go:282] 0 containers: []
	W1129 09:02:24.123964  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:02:24.123972  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:02:24.124032  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:02:24.158678  460401 cri.go:89] found id: "c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:24.158704  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:24.158710  460401 cri.go:89] found id: ""
	I1129 09:02:24.158721  460401 logs.go:282] 2 containers: [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:02:24.158824  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:24.164380  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:24.170117  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:02:24.170196  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:02:24.202016  460401 cri.go:89] found id: ""
	I1129 09:02:24.202057  460401 logs.go:282] 0 containers: []
	W1129 09:02:24.202066  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:02:24.202072  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:02:24.202123  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:02:24.235359  460401 cri.go:89] found id: ""
	I1129 09:02:24.235388  460401 logs.go:282] 0 containers: []
	W1129 09:02:24.235399  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:02:24.235412  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:02:24.235427  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:02:24.327121  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:24.327167  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:24.380608  460401 logs.go:123] Gathering logs for kube-controller-manager [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6] ...
	I1129 09:02:24.380651  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:24.411895  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:02:24.411923  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:24.450543  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:02:24.450575  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:02:24.500105  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:02:24.500146  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	64dcae39f0e63       56cc512116c8f       9 seconds ago       Running             busybox                   0                   c3b03930e2672       busybox                                          default
	84eb7f692c990       ead0a4a53df89       15 seconds ago      Running             coredns                   0                   46a4885d817e8       coredns-5dd5756b68-phw28                         kube-system
	c2b64aca34f8b       6e38f40d628db       15 seconds ago      Running             storage-provisioner       0                   f0e9f57ece0e7       storage-provisioner                              kube-system
	c556471fd7ebd       409467f978b4a       26 seconds ago      Running             kindnet-cni               0                   c9cb87dbe2bae       kindnet-k4n9l                                    kube-system
	c3eb6059b5593       ea1030da44aa1       29 seconds ago      Running             kube-proxy                0                   d9056ddc2e968       kube-proxy-4rfb4                                 kube-system
	ec1e8ae808249       f6f496300a2ae       47 seconds ago      Running             kube-scheduler            0                   7caf413f5769e       kube-scheduler-old-k8s-version-295154            kube-system
	b3d9ef849b109       4be79c38a4bab       47 seconds ago      Running             kube-controller-manager   0                   f845d639a6e89       kube-controller-manager-old-k8s-version-295154   kube-system
	e534f6de34cb5       73deb9a3f7025       47 seconds ago      Running             etcd                      0                   83b4224fe982d       etcd-old-k8s-version-295154                      kube-system
	c912b0431f5b9       bb5e0dde9054c       47 seconds ago      Running             kube-apiserver            0                   c5ef1020ba416       kube-apiserver-old-k8s-version-295154            kube-system
	
	
	==> containerd <==
	Nov 29 09:02:13 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:13.171284629Z" level=info msg="CreateContainer within sandbox \"f0e9f57ece0e7298ea8ff52e824c152b0a198734fa271e11f9da85ab94980def\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"c2b64aca34f8b72337fd1dd9bda969ab607f739b3b5bd64a9962706bb51f1368\""
	Nov 29 09:02:13 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:13.171952045Z" level=info msg="StartContainer for \"c2b64aca34f8b72337fd1dd9bda969ab607f739b3b5bd64a9962706bb51f1368\""
	Nov 29 09:02:13 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:13.173213037Z" level=info msg="connecting to shim c2b64aca34f8b72337fd1dd9bda969ab607f739b3b5bd64a9962706bb51f1368" address="unix:///run/containerd/s/dc122ba824fb2ecb94628ad2391429e4d2b98c17ac396814c4a25b4d93b141fe" protocol=ttrpc version=3
	Nov 29 09:02:13 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:13.175196491Z" level=info msg="CreateContainer within sandbox \"46a4885d817e84fab45e9ad70e7c335ccc0f307e19f484641f3f563e19a3b305\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"84eb7f692c99059489020b59b47c169ecc9d4286a2bf7a532dae7f5d13e68795\""
	Nov 29 09:02:13 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:13.175823701Z" level=info msg="StartContainer for \"84eb7f692c99059489020b59b47c169ecc9d4286a2bf7a532dae7f5d13e68795\""
	Nov 29 09:02:13 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:13.176634429Z" level=info msg="connecting to shim 84eb7f692c99059489020b59b47c169ecc9d4286a2bf7a532dae7f5d13e68795" address="unix:///run/containerd/s/950489f09bce35a172bb4082bad530c176c650052c0ffe9dab18daf70ee3f021" protocol=ttrpc version=3
	Nov 29 09:02:13 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:13.230846483Z" level=info msg="StartContainer for \"c2b64aca34f8b72337fd1dd9bda969ab607f739b3b5bd64a9962706bb51f1368\" returns successfully"
	Nov 29 09:02:13 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:13.234243145Z" level=info msg="StartContainer for \"84eb7f692c99059489020b59b47c169ecc9d4286a2bf7a532dae7f5d13e68795\" returns successfully"
	Nov 29 09:02:16 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:16.439586027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:54baf2f4-8de5-4f66-92ac-f5315174d940,Namespace:default,Attempt:0,}"
	Nov 29 09:02:16 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:16.482219935Z" level=info msg="connecting to shim c3b03930e26728c610c785b965715fd3b553dfa8fa71b6e35bcc2370b534d413" address="unix:///run/containerd/s/705109ebb456d589bcc59459487d5f036c6a54c53bc3e7a7b9f9e1b41d8f56cc" namespace=k8s.io protocol=ttrpc version=3
	Nov 29 09:02:16 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:16.554186463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:54baf2f4-8de5-4f66-92ac-f5315174d940,Namespace:default,Attempt:0,} returns sandbox id \"c3b03930e26728c610c785b965715fd3b553dfa8fa71b6e35bcc2370b534d413\""
	Nov 29 09:02:16 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:16.556162494Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 29 09:02:19 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:19.188092236Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:02:19 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:19.188755127Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396643"
	Nov 29 09:02:19 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:19.190108938Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:02:19 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:19.192089044Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:02:19 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:19.192508223Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.636298875s"
	Nov 29 09:02:19 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:19.192553605Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 29 09:02:19 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:19.194479178Z" level=info msg="CreateContainer within sandbox \"c3b03930e26728c610c785b965715fd3b553dfa8fa71b6e35bcc2370b534d413\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 29 09:02:19 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:19.201487714Z" level=info msg="Container 64dcae39f0e638d4b6c6e188a3cb9da7d32231fa3ff9ad25ba54b2c00601f705: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:02:19 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:19.207643963Z" level=info msg="CreateContainer within sandbox \"c3b03930e26728c610c785b965715fd3b553dfa8fa71b6e35bcc2370b534d413\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"64dcae39f0e638d4b6c6e188a3cb9da7d32231fa3ff9ad25ba54b2c00601f705\""
	Nov 29 09:02:19 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:19.208357251Z" level=info msg="StartContainer for \"64dcae39f0e638d4b6c6e188a3cb9da7d32231fa3ff9ad25ba54b2c00601f705\""
	Nov 29 09:02:19 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:19.209198742Z" level=info msg="connecting to shim 64dcae39f0e638d4b6c6e188a3cb9da7d32231fa3ff9ad25ba54b2c00601f705" address="unix:///run/containerd/s/705109ebb456d589bcc59459487d5f036c6a54c53bc3e7a7b9f9e1b41d8f56cc" protocol=ttrpc version=3
	Nov 29 09:02:19 old-k8s-version-295154 containerd[663]: time="2025-11-29T09:02:19.268677673Z" level=info msg="StartContainer for \"64dcae39f0e638d4b6c6e188a3cb9da7d32231fa3ff9ad25ba54b2c00601f705\" returns successfully"
	Nov 29 09:02:25 old-k8s-version-295154 containerd[663]: E1129 09:02:25.213853     663 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [84eb7f692c99059489020b59b47c169ecc9d4286a2bf7a532dae7f5d13e68795] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46306 - 2219 "HINFO IN 2134159150006616805.6033665223682648056. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036424572s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-295154
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-295154
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=old-k8s-version-295154
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_01_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:01:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-295154
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:02:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:02:16 +0000   Sat, 29 Nov 2025 09:01:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:02:16 +0000   Sat, 29 Nov 2025 09:01:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:02:16 +0000   Sat, 29 Nov 2025 09:01:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:02:16 +0000   Sat, 29 Nov 2025 09:02:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-295154
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                22b437c1-66e6-4b41-85ab-28edf17772d8
	  Boot ID:                    b81dce2f-73d5-4349-b473-aa1210058cb8
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-5dd5756b68-phw28                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     30s
	  kube-system                 etcd-old-k8s-version-295154                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         42s
	  kube-system                 kindnet-k4n9l                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-old-k8s-version-295154             250m (3%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-old-k8s-version-295154    200m (2%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-4rfb4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-old-k8s-version-295154             100m (1%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 29s   kube-proxy       
	  Normal  Starting                 43s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  42s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  42s   kubelet          Node old-k8s-version-295154 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s   kubelet          Node old-k8s-version-295154 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s   kubelet          Node old-k8s-version-295154 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s   node-controller  Node old-k8s-version-295154 event: Registered Node old-k8s-version-295154 in Controller
	  Normal  NodeReady                16s   kubelet          Node old-k8s-version-295154 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov29 07:17] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001881] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084003] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.378167] i8042: Warning: Keylock active
	[  +0.012106] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.460417] block sda: the capability attribute has been deprecated.
	[  +0.079627] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.021012] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.285522] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [e534f6de34cb59a48842df5c90bc3db11dfa608b2f5ab4df9fd455d5a0bc5f86] <==
	{"level":"info","ts":"2025-11-29T09:01:40.832264Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"ea7e25599daad906","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-11-29T09:01:40.833809Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-29T09:01:40.834831Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-29T09:01:40.835134Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-29T09:01:40.835187Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-29T09:01:40.835365Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-29T09:01:40.835454Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-29T09:01:41.123873Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-29T09:01:41.123935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-29T09:01:41.123975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-11-29T09:01:41.123993Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-11-29T09:01:41.124004Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-29T09:01:41.124048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-11-29T09:01:41.124063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-29T09:01:41.125302Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-29T09:01:41.125326Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-29T09:01:41.125372Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T09:01:41.125276Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-295154 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-29T09:01:41.126456Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T09:01:41.126541Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T09:01:41.126567Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-29T09:01:41.126779Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-29T09:01:41.127083Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-29T09:01:41.127112Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-29T09:01:41.126728Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:02:28 up  1:44,  0 user,  load average: 2.64, 2.82, 12.39
	Linux old-k8s-version-295154 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c556471fd7ebd161ba2d7b8d6bae271ee70e193598e07a1f28e7e4edb21ff0ac] <==
	I1129 09:02:02.479657       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:02:02.479993       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1129 09:02:02.480115       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:02:02.480129       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:02:02.480148       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:02:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:02:02.682312       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:02:02.682392       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:02:02.682406       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:02:02.682562       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:02:03.155518       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:02:03.155556       1 metrics.go:72] Registering metrics
	I1129 09:02:03.155642       1 controller.go:711] "Syncing nftables rules"
	I1129 09:02:12.691133       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:02:12.691191       1 main.go:301] handling current node
	I1129 09:02:22.684230       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:02:22.684264       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c912b0431f5b96b6ae8d3df9e39af5a731f5b6f4a3128fbae403427258cd4010] <==
	I1129 09:01:42.628432       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1129 09:01:42.628473       1 aggregator.go:166] initial CRD sync complete...
	I1129 09:01:42.628487       1 autoregister_controller.go:141] Starting autoregister controller
	I1129 09:01:42.628498       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1129 09:01:42.628507       1 cache.go:39] Caches are synced for autoregister controller
	I1129 09:01:42.630276       1 controller.go:624] quota admission added evaluator for: namespaces
	I1129 09:01:42.631842       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1129 09:01:42.632653       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1129 09:01:42.633160       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1129 09:01:42.675946       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:01:43.534299       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1129 09:01:43.538893       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1129 09:01:43.538914       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:01:44.048669       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:01:44.089332       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:01:44.139778       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1129 09:01:44.147964       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1129 09:01:44.149152       1 controller.go:624] quota admission added evaluator for: endpoints
	I1129 09:01:44.153475       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:01:44.583851       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1129 09:01:45.899683       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1129 09:01:45.911834       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1129 09:01:45.923913       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1129 09:01:58.190396       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1129 09:01:58.345309       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [b3d9ef849b10991879886d480043efb13728841f71afc04d4c57f7bef3ceffc8] <==
	I1129 09:01:57.601489       1 shared_informer.go:318] Caches are synced for HPA
	I1129 09:01:57.641964       1 shared_informer.go:318] Caches are synced for resource quota
	I1129 09:01:57.693466       1 shared_informer.go:318] Caches are synced for resource quota
	I1129 09:01:58.013319       1 shared_informer.go:318] Caches are synced for garbage collector
	I1129 09:01:58.081463       1 shared_informer.go:318] Caches are synced for garbage collector
	I1129 09:01:58.081502       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1129 09:01:58.201293       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-k4n9l"
	I1129 09:01:58.203642       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4rfb4"
	I1129 09:01:58.351467       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1129 09:01:58.446469       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-rjd8l"
	I1129 09:01:58.457821       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-phw28"
	I1129 09:01:58.472248       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="121.660505ms"
	I1129 09:01:58.490138       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.818584ms"
	I1129 09:01:58.490294       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="98.203µs"
	I1129 09:01:58.749707       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1129 09:01:58.764048       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-rjd8l"
	I1129 09:01:58.771830       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="24.493664ms"
	I1129 09:01:58.778438       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.545401ms"
	I1129 09:01:58.778711       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.414µs"
	I1129 09:02:12.741856       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="137.043µs"
	I1129 09:02:12.755154       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="122.723µs"
	I1129 09:02:14.089302       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="163.286µs"
	I1129 09:02:14.110178       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.287126ms"
	I1129 09:02:14.110300       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="75.729µs"
	I1129 09:02:17.447692       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [c3eb6059b5593e42d8e9ac6b43ac8b87e944eac5747f993c6bbca2acc16f180b] <==
	I1129 09:01:58.837203       1 server_others.go:69] "Using iptables proxy"
	I1129 09:01:58.847060       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1129 09:01:58.872286       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:01:58.874956       1 server_others.go:152] "Using iptables Proxier"
	I1129 09:01:58.875022       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1129 09:01:58.875038       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1129 09:01:58.875085       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1129 09:01:58.875423       1 server.go:846] "Version info" version="v1.28.0"
	I1129 09:01:58.875446       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:01:58.877361       1 config.go:188] "Starting service config controller"
	I1129 09:01:58.877426       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1129 09:01:58.878055       1 config.go:97] "Starting endpoint slice config controller"
	I1129 09:01:58.878080       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1129 09:01:58.878567       1 config.go:315] "Starting node config controller"
	I1129 09:01:58.878812       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1129 09:01:58.977719       1 shared_informer.go:318] Caches are synced for service config
	I1129 09:01:58.978897       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1129 09:01:58.979002       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [ec1e8ae808249468b5a57a4c1aa02a0700a8af9e46e3b394b96fda393ef3531b] <==
	E1129 09:01:42.591266       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1129 09:01:42.591281       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1129 09:01:43.438322       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1129 09:01:43.438354       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1129 09:01:43.459244       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1129 09:01:43.459274       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1129 09:01:43.466076       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1129 09:01:43.466111       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1129 09:01:43.467104       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1129 09:01:43.467131       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1129 09:01:43.496506       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1129 09:01:43.496554       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1129 09:01:43.745308       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1129 09:01:43.745358       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1129 09:01:43.782232       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1129 09:01:43.782279       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1129 09:01:43.784711       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1129 09:01:43.784785       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1129 09:01:43.822287       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1129 09:01:43.822413       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1129 09:01:43.831935       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1129 09:01:43.831979       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1129 09:01:44.009190       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1129 09:01:44.009227       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1129 09:01:46.586725       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 29 09:01:57 old-k8s-version-295154 kubelet[1505]: I1129 09:01:57.557701    1505 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 29 09:01:58 old-k8s-version-295154 kubelet[1505]: I1129 09:01:58.211770    1505 topology_manager.go:215] "Topology Admit Handler" podUID="74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8" podNamespace="kube-system" podName="kindnet-k4n9l"
	Nov 29 09:01:58 old-k8s-version-295154 kubelet[1505]: I1129 09:01:58.211977    1505 topology_manager.go:215] "Topology Admit Handler" podUID="05ef67c3-0d6e-453d-a0e5-81c649c3e033" podNamespace="kube-system" podName="kube-proxy-4rfb4"
	Nov 29 09:01:58 old-k8s-version-295154 kubelet[1505]: I1129 09:01:58.245664    1505 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvjhl\" (UniqueName: \"kubernetes.io/projected/74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8-kube-api-access-kvjhl\") pod \"kindnet-k4n9l\" (UID: \"74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8\") " pod="kube-system/kindnet-k4n9l"
	Nov 29 09:01:58 old-k8s-version-295154 kubelet[1505]: I1129 09:01:58.245757    1505 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8-cni-cfg\") pod \"kindnet-k4n9l\" (UID: \"74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8\") " pod="kube-system/kindnet-k4n9l"
	Nov 29 09:01:58 old-k8s-version-295154 kubelet[1505]: I1129 09:01:58.245804    1505 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8-lib-modules\") pod \"kindnet-k4n9l\" (UID: \"74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8\") " pod="kube-system/kindnet-k4n9l"
	Nov 29 09:01:58 old-k8s-version-295154 kubelet[1505]: I1129 09:01:58.245867    1505 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05ef67c3-0d6e-453d-a0e5-81c649c3e033-xtables-lock\") pod \"kube-proxy-4rfb4\" (UID: \"05ef67c3-0d6e-453d-a0e5-81c649c3e033\") " pod="kube-system/kube-proxy-4rfb4"
	Nov 29 09:01:58 old-k8s-version-295154 kubelet[1505]: I1129 09:01:58.245918    1505 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05ef67c3-0d6e-453d-a0e5-81c649c3e033-lib-modules\") pod \"kube-proxy-4rfb4\" (UID: \"05ef67c3-0d6e-453d-a0e5-81c649c3e033\") " pod="kube-system/kube-proxy-4rfb4"
	Nov 29 09:01:58 old-k8s-version-295154 kubelet[1505]: I1129 09:01:58.245964    1505 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/05ef67c3-0d6e-453d-a0e5-81c649c3e033-kube-proxy\") pod \"kube-proxy-4rfb4\" (UID: \"05ef67c3-0d6e-453d-a0e5-81c649c3e033\") " pod="kube-system/kube-proxy-4rfb4"
	Nov 29 09:01:58 old-k8s-version-295154 kubelet[1505]: I1129 09:01:58.245999    1505 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8-xtables-lock\") pod \"kindnet-k4n9l\" (UID: \"74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8\") " pod="kube-system/kindnet-k4n9l"
	Nov 29 09:01:58 old-k8s-version-295154 kubelet[1505]: I1129 09:01:58.246031    1505 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6tpd\" (UniqueName: \"kubernetes.io/projected/05ef67c3-0d6e-453d-a0e5-81c649c3e033-kube-api-access-l6tpd\") pod \"kube-proxy-4rfb4\" (UID: \"05ef67c3-0d6e-453d-a0e5-81c649c3e033\") " pod="kube-system/kube-proxy-4rfb4"
	Nov 29 09:01:59 old-k8s-version-295154 kubelet[1505]: I1129 09:01:59.051481    1505 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4rfb4" podStartSLOduration=1.051403893 podCreationTimestamp="2025-11-29 09:01:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:01:59.051034434 +0000 UTC m=+13.185091147" watchObservedRunningTime="2025-11-29 09:01:59.051403893 +0000 UTC m=+13.185460607"
	Nov 29 09:02:03 old-k8s-version-295154 kubelet[1505]: I1129 09:02:03.075069    1505 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-k4n9l" podStartSLOduration=1.8021440370000001 podCreationTimestamp="2025-11-29 09:01:58 +0000 UTC" firstStartedPulling="2025-11-29 09:01:58.884230342 +0000 UTC m=+13.018287046" lastFinishedPulling="2025-11-29 09:02:02.157002868 +0000 UTC m=+16.291059564" observedRunningTime="2025-11-29 09:02:03.074620988 +0000 UTC m=+17.208677701" watchObservedRunningTime="2025-11-29 09:02:03.074916555 +0000 UTC m=+17.208973271"
	Nov 29 09:02:12 old-k8s-version-295154 kubelet[1505]: I1129 09:02:12.718189    1505 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 29 09:02:12 old-k8s-version-295154 kubelet[1505]: I1129 09:02:12.741770    1505 topology_manager.go:215] "Topology Admit Handler" podUID="7fc2b8dd-43dd-43df-8887-9ffa6de36fb4" podNamespace="kube-system" podName="coredns-5dd5756b68-phw28"
	Nov 29 09:02:12 old-k8s-version-295154 kubelet[1505]: I1129 09:02:12.742156    1505 topology_manager.go:215] "Topology Admit Handler" podUID="359871fd-a77c-430a-87c1-b313992718e2" podNamespace="kube-system" podName="storage-provisioner"
	Nov 29 09:02:12 old-k8s-version-295154 kubelet[1505]: I1129 09:02:12.838446    1505 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sztkn\" (UniqueName: \"kubernetes.io/projected/7fc2b8dd-43dd-43df-8887-9ffa6de36fb4-kube-api-access-sztkn\") pod \"coredns-5dd5756b68-phw28\" (UID: \"7fc2b8dd-43dd-43df-8887-9ffa6de36fb4\") " pod="kube-system/coredns-5dd5756b68-phw28"
	Nov 29 09:02:12 old-k8s-version-295154 kubelet[1505]: I1129 09:02:12.838527    1505 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ghrm\" (UniqueName: \"kubernetes.io/projected/359871fd-a77c-430a-87c1-b313992718e2-kube-api-access-2ghrm\") pod \"storage-provisioner\" (UID: \"359871fd-a77c-430a-87c1-b313992718e2\") " pod="kube-system/storage-provisioner"
	Nov 29 09:02:12 old-k8s-version-295154 kubelet[1505]: I1129 09:02:12.838708    1505 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7fc2b8dd-43dd-43df-8887-9ffa6de36fb4-config-volume\") pod \"coredns-5dd5756b68-phw28\" (UID: \"7fc2b8dd-43dd-43df-8887-9ffa6de36fb4\") " pod="kube-system/coredns-5dd5756b68-phw28"
	Nov 29 09:02:12 old-k8s-version-295154 kubelet[1505]: I1129 09:02:12.838811    1505 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/359871fd-a77c-430a-87c1-b313992718e2-tmp\") pod \"storage-provisioner\" (UID: \"359871fd-a77c-430a-87c1-b313992718e2\") " pod="kube-system/storage-provisioner"
	Nov 29 09:02:14 old-k8s-version-295154 kubelet[1505]: I1129 09:02:14.089000    1505 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-phw28" podStartSLOduration=16.088943107 podCreationTimestamp="2025-11-29 09:01:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:02:14.088869179 +0000 UTC m=+28.222925894" watchObservedRunningTime="2025-11-29 09:02:14.088943107 +0000 UTC m=+28.222999821"
	Nov 29 09:02:14 old-k8s-version-295154 kubelet[1505]: I1129 09:02:14.111723    1505 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.111665904 podCreationTimestamp="2025-11-29 09:01:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:02:14.111613929 +0000 UTC m=+28.245670654" watchObservedRunningTime="2025-11-29 09:02:14.111665904 +0000 UTC m=+28.245722610"
	Nov 29 09:02:16 old-k8s-version-295154 kubelet[1505]: I1129 09:02:16.130277    1505 topology_manager.go:215] "Topology Admit Handler" podUID="54baf2f4-8de5-4f66-92ac-f5315174d940" podNamespace="default" podName="busybox"
	Nov 29 09:02:16 old-k8s-version-295154 kubelet[1505]: I1129 09:02:16.160532    1505 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj46k\" (UniqueName: \"kubernetes.io/projected/54baf2f4-8de5-4f66-92ac-f5315174d940-kube-api-access-wj46k\") pod \"busybox\" (UID: \"54baf2f4-8de5-4f66-92ac-f5315174d940\") " pod="default/busybox"
	Nov 29 09:02:20 old-k8s-version-295154 kubelet[1505]: I1129 09:02:20.102644    1505 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.465512975 podCreationTimestamp="2025-11-29 09:02:16 +0000 UTC" firstStartedPulling="2025-11-29 09:02:16.555803596 +0000 UTC m=+30.689860305" lastFinishedPulling="2025-11-29 09:02:19.192874383 +0000 UTC m=+33.326931083" observedRunningTime="2025-11-29 09:02:20.102453338 +0000 UTC m=+34.236510058" watchObservedRunningTime="2025-11-29 09:02:20.102583753 +0000 UTC m=+34.236640469"
	
	
	==> storage-provisioner [c2b64aca34f8b72337fd1dd9bda969ab607f739b3b5bd64a9962706bb51f1368] <==
	I1129 09:02:13.242146       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 09:02:13.250320       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 09:02:13.250375       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1129 09:02:13.260646       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:02:13.260835       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3d38b917-49d9-4ce8-b6d4-33e78e4354a6", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-295154_6170b45d-8612-41e5-bb3d-e5fe156c196d became leader
	I1129 09:02:13.260885       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-295154_6170b45d-8612-41e5-bb3d-e5fe156c196d!
	I1129 09:02:13.362157       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-295154_6170b45d-8612-41e5-bb3d-e5fe156c196d!
	

                                                
                                                
-- /stdout --
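
The storage-provisioner log above shows the pod acquiring the leader lease on the k8s.io-minikube-hostpath Endpoints object in kube-system before starting its provisioner controller. Purely as an illustration (not part of the test flow), the holder recorded on that object could be inspected with a command along these lines, assuming the old-k8s-version-295154 profile is still running:

  kubectl --context old-k8s-version-295154 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
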
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-295154 -n old-k8s-version-295154
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-295154 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (13.54s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (14.48s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-924441 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [26d445de-fc0e-4bc8-adac-935cd86ee75c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [26d445de-fc0e-4bc8-adac-935cd86ee75c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003036049s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-924441 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
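
The failing check above runs 'ulimit -n' inside the busybox pod and expects the soft open-file limit to be 1048576, but the shell reported 1024. A minimal way to reproduce the comparison by hand, assuming the no-preload-924441 profile from this run is still up (the second command is only an illustrative way to see the nofile limit the node's containerd service was started with, not something the test itself executes):

  # Same check the test performs at start_stop_delete_test.go:194
  kubectl --context no-preload-924441 exec busybox -- /bin/sh -c "ulimit -n"

  # Illustrative follow-up: the LimitNOFILE applied to containerd inside the minikube node
  minikube -p no-preload-924441 ssh -- systemctl show containerd --property=LimitNOFILE
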
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-924441
helpers_test.go:243: (dbg) docker inspect no-preload-924441:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a046473c1ebd3e2a896b4623ae8e55f92f450aee8768c4e4794475dd0cc24d4e",
	        "Created": "2025-11-29T09:01:32.925843748Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 495044,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:01:32.964068054Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/a046473c1ebd3e2a896b4623ae8e55f92f450aee8768c4e4794475dd0cc24d4e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a046473c1ebd3e2a896b4623ae8e55f92f450aee8768c4e4794475dd0cc24d4e/hostname",
	        "HostsPath": "/var/lib/docker/containers/a046473c1ebd3e2a896b4623ae8e55f92f450aee8768c4e4794475dd0cc24d4e/hosts",
	        "LogPath": "/var/lib/docker/containers/a046473c1ebd3e2a896b4623ae8e55f92f450aee8768c4e4794475dd0cc24d4e/a046473c1ebd3e2a896b4623ae8e55f92f450aee8768c4e4794475dd0cc24d4e-json.log",
	        "Name": "/no-preload-924441",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-924441:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-924441",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a046473c1ebd3e2a896b4623ae8e55f92f450aee8768c4e4794475dd0cc24d4e",
	                "LowerDir": "/var/lib/docker/overlay2/bf084be51e328d85d7140d3bad32d403cc9913fc552c9ca7103255f4bb584fbf-init/diff:/var/lib/docker/overlay2/eb180691bce18b8d981b2d61ed0962851c615364ed77c18ff66d559424569005/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bf084be51e328d85d7140d3bad32d403cc9913fc552c9ca7103255f4bb584fbf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bf084be51e328d85d7140d3bad32d403cc9913fc552c9ca7103255f4bb584fbf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bf084be51e328d85d7140d3bad32d403cc9913fc552c9ca7103255f4bb584fbf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-924441",
	                "Source": "/var/lib/docker/volumes/no-preload-924441/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-924441",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-924441",
	                "name.minikube.sigs.k8s.io": "no-preload-924441",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "47b2f0630bf6412a68ffd5a9a49dd44e6a182af0bdc63a26033a455ecf9fea54",
	            "SandboxKey": "/var/run/docker/netns/47b2f0630bf6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-924441": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "01c660269bf53aee934478816016519cb57246f9bdf0fd8776b42bd6fef191ec",
	                    "EndpointID": "ff825fdc88e8e3aa38fffe8f597fbd32723bbbdc953f28e7a6730f82ccf0aad2",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "4a:29:88:7e:70:ed",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-924441",
	                        "a046473c1ebd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-924441 -n no-preload-924441
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-924441 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-924441 logs -n 25: (1.123680238s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-770004 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ ssh     │ -p cilium-770004 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ ssh     │ -p cilium-770004 sudo containerd config dump                                                                                                                                                                                                        │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ ssh     │ -p cilium-770004 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ ssh     │ -p cilium-770004 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ ssh     │ -p cilium-770004 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ ssh     │ -p cilium-770004 sudo crio config                                                                                                                                                                                                                   │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ delete  │ -p cilium-770004                                                                                                                                                                                                                                    │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │ 29 Nov 25 09:00 UTC │
	│ start   │ -p force-systemd-env-693869 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-693869 │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │ 29 Nov 25 09:01 UTC │
	│ start   │ -p pause-563162 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                                                                              │ pause-563162             │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │ 29 Nov 25 09:01 UTC │
	│ ssh     │ force-systemd-env-693869 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-693869 │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ delete  │ -p force-systemd-env-693869                                                                                                                                                                                                                         │ force-systemd-env-693869 │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ start   │ -p cert-options-536258 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-536258      │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ pause   │ -p pause-563162 --alsologtostderr -v=5                                                                                                                                                                                                              │ pause-563162             │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ unpause │ -p pause-563162 --alsologtostderr -v=5                                                                                                                                                                                                              │ pause-563162             │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ pause   │ -p pause-563162 --alsologtostderr -v=5                                                                                                                                                                                                              │ pause-563162             │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ delete  │ -p pause-563162 --alsologtostderr -v=5                                                                                                                                                                                                              │ pause-563162             │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ ssh     │ cert-options-536258 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-536258      │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ ssh     │ -p cert-options-536258 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-536258      │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ delete  │ -p cert-options-536258                                                                                                                                                                                                                              │ cert-options-536258      │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ delete  │ -p pause-563162                                                                                                                                                                                                                                     │ pause-563162             │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ start   │ -p old-k8s-version-295154 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-295154   │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:02 UTC │
	│ start   │ -p no-preload-924441 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-924441        │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:02 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-295154 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-295154   │ jenkins │ v1.37.0 │ 29 Nov 25 09:02 UTC │ 29 Nov 25 09:02 UTC │
	│ stop    │ -p old-k8s-version-295154 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-295154   │ jenkins │ v1.37.0 │ 29 Nov 25 09:02 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:01:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:01:26.371812  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:01:26.372231  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:01:26.372304  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:01:26.372374  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:01:26.406988  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:26.407016  460401 cri.go:89] found id: "40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac"
	I1129 09:01:26.407022  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:26.407027  460401 cri.go:89] found id: ""
	I1129 09:01:26.407038  460401 logs.go:282] 3 containers: [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:01:26.407111  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.413707  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.419492  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.424920  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:01:26.424999  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:01:26.456369  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:26.456395  460401 cri.go:89] found id: ""
	I1129 09:01:26.456406  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:01:26.456466  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.462064  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:01:26.462133  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:01:26.492837  460401 cri.go:89] found id: ""
	I1129 09:01:26.492868  460401 logs.go:282] 0 containers: []
	W1129 09:01:26.492879  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:01:26.492887  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:01:26.492955  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:01:26.521715  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:26.521747  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:26.521754  460401 cri.go:89] found id: ""
	I1129 09:01:26.521763  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:01:26.521821  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.526872  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.531295  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:01:26.531353  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:01:26.558218  460401 cri.go:89] found id: ""
	I1129 09:01:26.558248  460401 logs.go:282] 0 containers: []
	W1129 09:01:26.558257  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:01:26.558264  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:01:26.558313  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:01:26.587221  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:26.587246  460401 cri.go:89] found id: "f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:26.587253  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:26.587258  460401 cri.go:89] found id: ""
	I1129 09:01:26.587268  460401 logs.go:282] 3 containers: [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:01:26.587328  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.591954  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.596055  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.600163  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:01:26.600219  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:01:26.628586  460401 cri.go:89] found id: ""
	I1129 09:01:26.628613  460401 logs.go:282] 0 containers: []
	W1129 09:01:26.628624  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:01:26.628633  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:01:26.628690  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:01:26.657553  460401 cri.go:89] found id: ""
	I1129 09:01:26.657581  460401 logs.go:282] 0 containers: []
	W1129 09:01:26.657591  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:01:26.657603  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:01:26.657622  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:01:26.721559  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:01:26.721584  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:01:26.721601  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:26.756136  460401 logs.go:123] Gathering logs for kube-controller-manager [f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00] ...
	I1129 09:01:26.756165  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:26.787789  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:01:26.787827  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:01:26.838908  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:01:26.838943  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:01:26.875689  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:01:26.875723  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:01:26.946907  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:01:26.946941  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:26.982883  460401 logs.go:123] Gathering logs for kube-apiserver [40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac] ...
	I1129 09:01:26.982919  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac"
	W1129 09:01:27.012923  460401 logs.go:130] failed kube-apiserver [40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac": Process exited with status 1
	stdout:
	
	stderr:
	E1129 09:01:27.010611    2688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac\": not found" containerID="40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac"
	time="2025-11-29T09:01:27Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac\": not found"
	 output: 
	** stderr ** 
	E1129 09:01:27.010611    2688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac\": not found" containerID="40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac"
	time="2025-11-29T09:01:27Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac\": not found"
	
	** /stderr **
	I1129 09:01:27.012941  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:01:27.012953  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:27.051493  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:01:27.051526  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:27.089722  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:01:27.089755  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:27.138471  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:01:27.138504  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:27.172932  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:01:27.172962  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:27.207844  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:01:27.207878  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:01:29.500031  494126 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:01:29.500142  494126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:01:29.500153  494126 out.go:374] Setting ErrFile to fd 2...
	I1129 09:01:29.500159  494126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:01:29.500372  494126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-255825/.minikube/bin
	I1129 09:01:29.500882  494126 out.go:368] Setting JSON to false
	I1129 09:01:29.501996  494126 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6233,"bootTime":1764400656,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:01:29.502070  494126 start.go:143] virtualization: kvm guest
	I1129 09:01:29.506976  494126 out.go:179] * [no-preload-924441] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:01:29.508162  494126 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:01:29.508182  494126 notify.go:221] Checking for updates...
	I1129 09:01:29.510318  494126 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:01:29.511334  494126 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-255825/kubeconfig
	I1129 09:01:29.516252  494126 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-255825/.minikube
	I1129 09:01:29.517321  494126 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:01:29.518374  494126 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:01:29.519877  494126 config.go:182] Loaded profile config "cert-expiration-368536": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:01:29.519989  494126 config.go:182] Loaded profile config "kubernetes-upgrade-806701": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:01:29.520095  494126 config.go:182] Loaded profile config "old-k8s-version-295154": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1129 09:01:29.520225  494126 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:01:29.546023  494126 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 09:01:29.546141  494126 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:01:29.607775  494126 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:81 SystemTime:2025-11-29 09:01:29.596891851 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:01:29.607908  494126 docker.go:319] overlay module found
	I1129 09:01:29.610288  494126 out.go:179] * Using the docker driver based on user configuration
	I1129 09:01:29.611200  494126 start.go:309] selected driver: docker
	I1129 09:01:29.611220  494126 start.go:927] validating driver "docker" against <nil>
	I1129 09:01:29.611231  494126 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:01:29.611850  494126 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:01:29.673266  494126 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:81 SystemTime:2025-11-29 09:01:29.662655452 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:01:29.673484  494126 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 09:01:29.673822  494126 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:01:29.675454  494126 out.go:179] * Using Docker driver with root privileges
	I1129 09:01:29.679127  494126 cni.go:84] Creating CNI manager for ""
	I1129 09:01:29.679243  494126 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:01:29.679264  494126 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 09:01:29.679351  494126 start.go:353] cluster config:
	{Name:no-preload-924441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-924441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:01:29.680591  494126 out.go:179] * Starting "no-preload-924441" primary control-plane node in "no-preload-924441" cluster
	I1129 09:01:29.681517  494126 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1129 09:01:29.682533  494126 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:01:29.683845  494126 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:01:29.683975  494126 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/config.json ...
	I1129 09:01:29.683971  494126 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:01:29.684042  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/config.json: {Name:mk4df9140f26fdbfe5b2addb71b44607d26b26a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:29.684181  494126 cache.go:107] acquiring lock: {Name:mka90f7eac55a6e5d6d9651fc108f327509b562f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684233  494126 cache.go:107] acquiring lock: {Name:mk2c250a4202b546a18f0cc7664314439a4ec834 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684259  494126 cache.go:107] acquiring lock: {Name:mk976aaa4e01b0c9e83cc6925b8c3c72804bfa25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684288  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1129 09:01:29.684299  494126 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 144.373µs
	I1129 09:01:29.684315  494126 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1129 09:01:29.684321  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1129 09:01:29.684322  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1129 09:01:29.684332  494126 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 80.37µs
	I1129 09:01:29.684333  494126 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 119.913µs
	I1129 09:01:29.684341  494126 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1129 09:01:29.684344  494126 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1129 09:01:29.684332  494126 cache.go:107] acquiring lock: {Name:mkff44f5b6b961ddaa9acc3e74cf0480b0d2f776 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684358  494126 cache.go:107] acquiring lock: {Name:mk6080f4393a19fb5c4d6f436dce1a2bb1688f86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684378  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1129 09:01:29.684387  494126 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 58.113µs
	I1129 09:01:29.684395  494126 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1129 09:01:29.684399  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1129 09:01:29.684282  494126 cache.go:107] acquiring lock: {Name:mkb8e7a67c98a0b8caa208116d415323f5ca7ccc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684410  494126 cache.go:107] acquiring lock: {Name:mk47ee24ca074cb6cc1a641d737215686b099dc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684472  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1129 09:01:29.684482  494126 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 217.393µs
	I1129 09:01:29.684492  494126 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1129 09:01:29.684416  494126 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 61.464µs
	I1129 09:01:29.684504  494126 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1129 09:01:29.684517  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1129 09:01:29.684533  494126 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 171.692µs
	I1129 09:01:29.684552  494126 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1129 09:01:29.684643  494126 cache.go:107] acquiring lock: {Name:mk912246de843459c104f342794e23ecb1fc7a75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684790  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1129 09:01:29.684806  494126 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 226.111µs
	I1129 09:01:29.684824  494126 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1129 09:01:29.684840  494126 cache.go:87] Successfully saved all images to host disk.
	I1129 09:01:29.706829  494126 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:01:29.706854  494126 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:01:29.706878  494126 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:01:29.706918  494126 start.go:360] acquireMachinesLock for no-preload-924441: {Name:mkf9f3b6b30f178cf9b9d50a2dabce8e2c5d48f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.707056  494126 start.go:364] duration metric: took 99.455µs to acquireMachinesLock for "no-preload-924441"
	I1129 09:01:29.707090  494126 start.go:93] Provisioning new machine with config: &{Name:no-preload-924441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-924441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1129 09:01:29.707206  494126 start.go:125] createHost starting for "" (driver="docker")
	I1129 09:01:28.461537  493486 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1129 09:01:28.461867  493486 start.go:159] libmachine.API.Create for "old-k8s-version-295154" (driver="docker")
	I1129 09:01:28.461917  493486 client.go:173] LocalClient.Create starting
	I1129 09:01:28.462009  493486 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem
	I1129 09:01:28.462065  493486 main.go:143] libmachine: Decoding PEM data...
	I1129 09:01:28.462089  493486 main.go:143] libmachine: Parsing certificate...
	I1129 09:01:28.462160  493486 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem
	I1129 09:01:28.462186  493486 main.go:143] libmachine: Decoding PEM data...
	I1129 09:01:28.462205  493486 main.go:143] libmachine: Parsing certificate...
	I1129 09:01:28.462679  493486 cli_runner.go:164] Run: docker network inspect old-k8s-version-295154 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 09:01:28.481658  493486 cli_runner.go:211] docker network inspect old-k8s-version-295154 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 09:01:28.481745  493486 network_create.go:284] running [docker network inspect old-k8s-version-295154] to gather additional debugging logs...
	I1129 09:01:28.481770  493486 cli_runner.go:164] Run: docker network inspect old-k8s-version-295154
	W1129 09:01:28.500619  493486 cli_runner.go:211] docker network inspect old-k8s-version-295154 returned with exit code 1
	I1129 09:01:28.500661  493486 network_create.go:287] error running [docker network inspect old-k8s-version-295154]: docker network inspect old-k8s-version-295154: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-295154 not found
	I1129 09:01:28.500677  493486 network_create.go:289] output of [docker network inspect old-k8s-version-295154]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-295154 not found
	
	** /stderr **
	I1129 09:01:28.500849  493486 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:01:28.518426  493486 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f69c672bf913 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:26:40:f4:ed:4f:ab} reservation:<nil>}
	I1129 09:01:28.519384  493486 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-96d20aff5877 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:01:e2:a3:b8:33} reservation:<nil>}
	I1129 09:01:28.520407  493486 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f7906c56f869 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:29:75:e3:e0:7f} reservation:<nil>}
	I1129 09:01:28.521974  493486 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f90700}
	I1129 09:01:28.522028  493486 network_create.go:124] attempt to create docker network old-k8s-version-295154 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1129 09:01:28.522109  493486 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-295154 old-k8s-version-295154
	I1129 09:01:28.575478  493486 network_create.go:108] docker network old-k8s-version-295154 192.168.76.0/24 created
	I1129 09:01:28.575522  493486 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-295154" container
	I1129 09:01:28.575603  493486 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 09:01:28.593666  493486 cli_runner.go:164] Run: docker volume create old-k8s-version-295154 --label name.minikube.sigs.k8s.io=old-k8s-version-295154 --label created_by.minikube.sigs.k8s.io=true
	I1129 09:01:28.612389  493486 oci.go:103] Successfully created a docker volume old-k8s-version-295154
	I1129 09:01:28.612501  493486 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-295154-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-295154 --entrypoint /usr/bin/test -v old-k8s-version-295154:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 09:01:29.238109  493486 oci.go:107] Successfully prepared a docker volume old-k8s-version-295154
	I1129 09:01:29.238162  493486 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1129 09:01:29.238176  493486 kic.go:194] Starting extracting preloaded images to volume ...
	I1129 09:01:29.238241  493486 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-255825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-295154:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1129 09:01:32.586626  493486 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-255825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-295154:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (3.348341473s)
	I1129 09:01:32.586660  493486 kic.go:203] duration metric: took 3.348481997s to extract preloaded images to volume ...
	W1129 09:01:32.586761  493486 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1129 09:01:32.586805  493486 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1129 09:01:32.586861  493486 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 09:01:32.650922  493486 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-295154 --name old-k8s-version-295154 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-295154 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-295154 --network old-k8s-version-295154 --ip 192.168.76.2 --volume old-k8s-version-295154:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 09:01:32.982372  493486 cli_runner.go:164] Run: docker container inspect old-k8s-version-295154 --format={{.State.Running}}
	I1129 09:01:33.001073  493486 cli_runner.go:164] Run: docker container inspect old-k8s-version-295154 --format={{.State.Status}}
	I1129 09:01:33.021021  493486 cli_runner.go:164] Run: docker exec old-k8s-version-295154 stat /var/lib/dpkg/alternatives/iptables
	I1129 09:01:33.078706  493486 oci.go:144] the created container "old-k8s-version-295154" has a running status.
	I1129 09:01:33.078890  493486 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa...
	I1129 09:01:33.213970  493486 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 09:01:33.251103  493486 cli_runner.go:164] Run: docker container inspect old-k8s-version-295154 --format={{.State.Status}}
	I1129 09:01:29.709142  494126 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1129 09:01:29.709367  494126 start.go:159] libmachine.API.Create for "no-preload-924441" (driver="docker")
	I1129 09:01:29.709398  494126 client.go:173] LocalClient.Create starting
	I1129 09:01:29.709475  494126 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem
	I1129 09:01:29.709526  494126 main.go:143] libmachine: Decoding PEM data...
	I1129 09:01:29.709553  494126 main.go:143] libmachine: Parsing certificate...
	I1129 09:01:29.709629  494126 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem
	I1129 09:01:29.709661  494126 main.go:143] libmachine: Decoding PEM data...
	I1129 09:01:29.709679  494126 main.go:143] libmachine: Parsing certificate...
	I1129 09:01:29.710082  494126 cli_runner.go:164] Run: docker network inspect no-preload-924441 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 09:01:29.727862  494126 cli_runner.go:211] docker network inspect no-preload-924441 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 09:01:29.727982  494126 network_create.go:284] running [docker network inspect no-preload-924441] to gather additional debugging logs...
	I1129 09:01:29.728011  494126 cli_runner.go:164] Run: docker network inspect no-preload-924441
	W1129 09:01:29.747053  494126 cli_runner.go:211] docker network inspect no-preload-924441 returned with exit code 1
	I1129 09:01:29.747092  494126 network_create.go:287] error running [docker network inspect no-preload-924441]: docker network inspect no-preload-924441: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-924441 not found
	I1129 09:01:29.747129  494126 network_create.go:289] output of [docker network inspect no-preload-924441]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-924441 not found
	
	** /stderr **
	I1129 09:01:29.747297  494126 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:01:29.769138  494126 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f69c672bf913 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:26:40:f4:ed:4f:ab} reservation:<nil>}
	I1129 09:01:29.769961  494126 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-96d20aff5877 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:01:e2:a3:b8:33} reservation:<nil>}
	I1129 09:01:29.770795  494126 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f7906c56f869 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:29:75:e3:e0:7f} reservation:<nil>}
	I1129 09:01:29.771440  494126 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-aea341d97cf5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ea:fb:22:ff:e0:65} reservation:<nil>}
	I1129 09:01:29.771972  494126 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-5ec7c7346e1b IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:f6:a5:df:dd:c8:cf} reservation:<nil>}
	I1129 09:01:29.772536  494126 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-ede9a8c5c6b0 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:3e:6e:06:75:02:7a} reservation:<nil>}
	I1129 09:01:29.773382  494126 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00201aa40}
	I1129 09:01:29.773412  494126 network_create.go:124] attempt to create docker network no-preload-924441 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1129 09:01:29.773492  494126 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-924441 no-preload-924441
	I1129 09:01:29.826699  494126 network_create.go:108] docker network no-preload-924441 192.168.103.0/24 created
	I1129 09:01:29.826822  494126 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-924441" container
	I1129 09:01:29.826907  494126 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 09:01:29.848520  494126 cli_runner.go:164] Run: docker volume create no-preload-924441 --label name.minikube.sigs.k8s.io=no-preload-924441 --label created_by.minikube.sigs.k8s.io=true
	I1129 09:01:29.870388  494126 oci.go:103] Successfully created a docker volume no-preload-924441
	I1129 09:01:29.870496  494126 cli_runner.go:164] Run: docker run --rm --name no-preload-924441-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-924441 --entrypoint /usr/bin/test -v no-preload-924441:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 09:01:32.848045  494126 cli_runner.go:217] Completed: docker run --rm --name no-preload-924441-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-924441 --entrypoint /usr/bin/test -v no-preload-924441:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (2.977502795s)
	I1129 09:01:32.848077  494126 oci.go:107] Successfully prepared a docker volume no-preload-924441
	I1129 09:01:32.848131  494126 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	W1129 09:01:32.848227  494126 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1129 09:01:32.848271  494126 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1129 09:01:32.848312  494126 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 09:01:32.909124  494126 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-924441 --name no-preload-924441 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-924441 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-924441 --network no-preload-924441 --ip 192.168.103.2 --volume no-preload-924441:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 09:01:33.229639  494126 cli_runner.go:164] Run: docker container inspect no-preload-924441 --format={{.State.Running}}
	I1129 09:01:33.257967  494126 cli_runner.go:164] Run: docker container inspect no-preload-924441 --format={{.State.Status}}
	I1129 09:01:33.283525  494126 cli_runner.go:164] Run: docker exec no-preload-924441 stat /var/lib/dpkg/alternatives/iptables
	I1129 09:01:33.358911  494126 oci.go:144] the created container "no-preload-924441" has a running status.
	I1129 09:01:33.358964  494126 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa...
	I1129 09:01:33.456248  494126 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 09:01:33.491041  494126 cli_runner.go:164] Run: docker container inspect no-preload-924441 --format={{.State.Status}}
	I1129 09:01:33.515555  494126 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 09:01:33.515581  494126 kic_runner.go:114] Args: [docker exec --privileged no-preload-924441 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 09:01:33.567971  494126 cli_runner.go:164] Run: docker container inspect no-preload-924441 --format={{.State.Status}}
	I1129 09:01:33.599907  494126 machine.go:94] provisionDockerMachine start ...
	I1129 09:01:33.599999  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:33.634873  494126 main.go:143] libmachine: Using SSH client type: native
	I1129 09:01:33.635521  494126 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1129 09:01:33.635590  494126 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:01:33.636667  494126 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34766->127.0.0.1:33063: read: connection reset by peer
	I1129 09:01:29.724136  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:01:29.724608  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:01:29.724657  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:01:29.724702  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:01:29.763194  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:29.763266  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:29.763286  460401 cri.go:89] found id: ""
	I1129 09:01:29.763304  460401 logs.go:282] 2 containers: [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:01:29.763372  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.769877  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.774814  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:01:29.774887  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:01:29.810078  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:29.810105  460401 cri.go:89] found id: ""
	I1129 09:01:29.810116  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:01:29.810167  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.815272  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:01:29.815349  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:01:29.851653  460401 cri.go:89] found id: ""
	I1129 09:01:29.851680  460401 logs.go:282] 0 containers: []
	W1129 09:01:29.851691  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:01:29.851700  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:01:29.851773  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:01:29.883424  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:29.883449  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:29.883456  460401 cri.go:89] found id: ""
	I1129 09:01:29.883466  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:01:29.883537  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.889105  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.894072  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:01:29.894150  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:01:29.924971  460401 cri.go:89] found id: ""
	I1129 09:01:29.925006  460401 logs.go:282] 0 containers: []
	W1129 09:01:29.925019  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:01:29.925027  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:01:29.925129  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:01:29.954168  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:29.954194  460401 cri.go:89] found id: "f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:29.954199  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:29.954203  460401 cri.go:89] found id: ""
	I1129 09:01:29.954214  460401 logs.go:282] 3 containers: [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:01:29.954278  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.959542  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.964240  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.968754  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:01:29.968820  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:01:29.999663  460401 cri.go:89] found id: ""
	I1129 09:01:29.999685  460401 logs.go:282] 0 containers: []
	W1129 09:01:29.999694  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:01:29.999700  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:01:29.999780  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:01:30.029803  460401 cri.go:89] found id: ""
	I1129 09:01:30.029833  460401 logs.go:282] 0 containers: []
	W1129 09:01:30.029845  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:01:30.029859  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:01:30.029877  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:30.069873  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:01:30.069904  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:30.108923  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:01:30.108958  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:30.146649  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:01:30.146682  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:01:30.190480  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:01:30.190514  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:01:30.225134  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:01:30.225167  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:01:30.299416  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:01:30.299461  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:01:30.314711  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:01:30.314766  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:01:30.384833  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:01:30.384856  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:01:30.384879  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:30.420690  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:01:30.420720  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:30.476182  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:01:30.476221  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:30.507666  460401 logs.go:123] Gathering logs for kube-controller-manager [f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00] ...
	I1129 09:01:30.507698  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:30.536613  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:01:30.536640  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:33.076844  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:01:33.077304  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:01:33.077371  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:01:33.077426  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:01:33.111899  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:33.111922  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:33.111928  460401 cri.go:89] found id: ""
	I1129 09:01:33.111938  460401 logs.go:282] 2 containers: [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:01:33.111995  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.117191  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.122615  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:01:33.122688  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:01:33.163794  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:33.163822  460401 cri.go:89] found id: ""
	I1129 09:01:33.163834  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:01:33.163897  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.170244  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:01:33.170334  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:01:33.203629  460401 cri.go:89] found id: ""
	I1129 09:01:33.203662  460401 logs.go:282] 0 containers: []
	W1129 09:01:33.203675  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:01:33.203683  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:01:33.203759  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:01:33.248112  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:33.248142  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:33.248148  460401 cri.go:89] found id: ""
	I1129 09:01:33.248159  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:01:33.248226  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.255192  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.262339  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:01:33.262419  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:01:33.308727  460401 cri.go:89] found id: ""
	I1129 09:01:33.308855  460401 logs.go:282] 0 containers: []
	W1129 09:01:33.308869  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:01:33.308878  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:01:33.309309  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:01:33.361181  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:33.361234  460401 cri.go:89] found id: "f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:33.361241  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:33.361245  460401 cri.go:89] found id: ""
	I1129 09:01:33.361255  460401 logs.go:282] 3 containers: [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:01:33.361343  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.368091  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.374495  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.380899  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:01:33.380965  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:01:33.430643  460401 cri.go:89] found id: ""
	I1129 09:01:33.430670  460401 logs.go:282] 0 containers: []
	W1129 09:01:33.430681  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:01:33.430689  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:01:33.430771  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:01:33.467019  460401 cri.go:89] found id: ""
	I1129 09:01:33.467047  460401 logs.go:282] 0 containers: []
	W1129 09:01:33.467058  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:01:33.467072  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:01:33.467091  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:01:33.529538  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:01:33.529588  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:33.591866  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:01:33.591912  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:33.664144  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:01:33.664179  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:33.701152  460401 logs.go:123] Gathering logs for kube-controller-manager [f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00] ...
	I1129 09:01:33.701195  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:33.735624  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:01:33.735669  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:33.774144  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:01:33.774175  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:01:33.808426  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:01:33.808461  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:01:33.898471  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:01:33.898509  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:01:33.914358  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:01:33.914394  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:01:33.978927  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:01:33.978954  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:01:33.978975  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:34.016239  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:01:34.016268  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:34.055208  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:01:34.055239  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:33.275806  493486 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 09:01:33.275832  493486 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-295154 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 09:01:33.349350  493486 cli_runner.go:164] Run: docker container inspect old-k8s-version-295154 --format={{.State.Status}}
	I1129 09:01:33.378383  493486 machine.go:94] provisionDockerMachine start ...
	I1129 09:01:33.378475  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:33.410015  493486 main.go:143] libmachine: Using SSH client type: native
	I1129 09:01:33.410367  493486 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1129 09:01:33.410384  493486 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:01:33.577990  493486 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-295154
	
	I1129 09:01:33.578018  493486 ubuntu.go:182] provisioning hostname "old-k8s-version-295154"
	I1129 09:01:33.578086  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:33.609401  493486 main.go:143] libmachine: Using SSH client type: native
	I1129 09:01:33.609890  493486 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1129 09:01:33.609953  493486 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-295154 && echo "old-k8s-version-295154" | sudo tee /etc/hostname
	I1129 09:01:33.789112  493486 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-295154
	
	I1129 09:01:33.789205  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:33.813423  493486 main.go:143] libmachine: Using SSH client type: native
	I1129 09:01:33.813741  493486 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1129 09:01:33.813774  493486 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-295154' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-295154/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-295154' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:01:33.966671  493486 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:01:33.966701  493486 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-255825/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-255825/.minikube}
	I1129 09:01:33.966720  493486 ubuntu.go:190] setting up certificates
	I1129 09:01:33.966746  493486 provision.go:84] configureAuth start
	I1129 09:01:33.966809  493486 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-295154
	I1129 09:01:33.987509  493486 provision.go:143] copyHostCerts
	I1129 09:01:33.987591  493486 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem, removing ...
	I1129 09:01:33.987609  493486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem
	I1129 09:01:33.987703  493486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem (1078 bytes)
	I1129 09:01:33.987854  493486 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem, removing ...
	I1129 09:01:33.987873  493486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem
	I1129 09:01:33.987926  493486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem (1123 bytes)
	I1129 09:01:33.988030  493486 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem, removing ...
	I1129 09:01:33.988043  493486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem
	I1129 09:01:33.988093  493486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem (1679 bytes)
	I1129 09:01:33.988197  493486 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-295154 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-295154]
	I1129 09:01:34.173289  493486 provision.go:177] copyRemoteCerts
	I1129 09:01:34.173365  493486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:01:34.173409  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:34.192053  493486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa Username:docker}
	I1129 09:01:34.294293  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:01:34.313898  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1129 09:01:34.331337  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 09:01:34.348272  493486 provision.go:87] duration metric: took 381.510752ms to configureAuth
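The provision step above ("generating server cert ... san=[...]") issues a CA-signed TLS server certificate whose subject alternative names cover the container's IP and hostnames, then scps the cert and key into /etc/docker. A rough, self-contained Go sketch of issuing such a certificate with crypto/x509 (a throwaway CA stands in for ca.pem/ca-key.pem; this is illustrative, not minikube's provision code):

// Illustrative only: mirrors the shape of the "generating server cert ... san=[...]"
// provisioning step above; it is not minikube's provision code.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	// Throwaway CA standing in for ca.pem / ca-key.pem from the log.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate with the SANs reported in the provision.go line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-295154"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-295154"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)

	check(os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0o644))
	check(os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(srvKey)}), 0o600))
}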
	I1129 09:01:34.348301  493486 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:01:34.348457  493486 config.go:182] Loaded profile config "old-k8s-version-295154": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1129 09:01:34.348472  493486 machine.go:97] duration metric: took 970.068662ms to provisionDockerMachine
	I1129 09:01:34.348481  493486 client.go:176] duration metric: took 5.886553133s to LocalClient.Create
	I1129 09:01:34.348502  493486 start.go:167] duration metric: took 5.88663904s to libmachine.API.Create "old-k8s-version-295154"
	I1129 09:01:34.348512  493486 start.go:293] postStartSetup for "old-k8s-version-295154" (driver="docker")
	I1129 09:01:34.348520  493486 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:01:34.348570  493486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:01:34.348614  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:34.366501  493486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa Username:docker}
	I1129 09:01:34.469910  493486 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:01:34.473823  493486 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:01:34.473855  493486 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:01:34.473868  493486 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-255825/.minikube/addons for local assets ...
	I1129 09:01:34.473922  493486 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-255825/.minikube/files for local assets ...
	I1129 09:01:34.474038  493486 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem -> 2594832.pem in /etc/ssl/certs
	I1129 09:01:34.474177  493486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:01:34.481912  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem --> /etc/ssl/certs/2594832.pem (1708 bytes)
	I1129 09:01:34.502433  493486 start.go:296] duration metric: took 153.905912ms for postStartSetup
	I1129 09:01:34.502813  493486 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-295154
	I1129 09:01:34.520071  493486 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/config.json ...
	I1129 09:01:34.520308  493486 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:01:34.520347  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:34.539111  493486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa Username:docker}
	I1129 09:01:34.640199  493486 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:01:34.644901  493486 start.go:128] duration metric: took 6.185289215s to createHost
	I1129 09:01:34.644928  493486 start.go:83] releasing machines lock for "old-k8s-version-295154", held for 6.185484113s
	I1129 09:01:34.644991  493486 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-295154
	I1129 09:01:34.662525  493486 ssh_runner.go:195] Run: cat /version.json
	I1129 09:01:34.662583  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:34.662584  493486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:01:34.662648  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:34.679837  493486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa Username:docker}
	I1129 09:01:34.681115  493486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa Username:docker}
	I1129 09:01:34.833568  493486 ssh_runner.go:195] Run: systemctl --version
	I1129 09:01:34.840355  493486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:01:34.844844  493486 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:01:34.844907  493486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:01:34.869137  493486 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1129 09:01:34.869161  493486 start.go:496] detecting cgroup driver to use...
	I1129 09:01:34.869194  493486 detect.go:190] detected "systemd" cgroup driver on host os
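Before the containerd config is rewritten, the log notes that the "systemd" cgroup driver was detected on the host. One common heuristic for that decision, sketched here as an assumption rather than a quote of minikube's detect.go, is: a unified cgroup v2 hierarchy or a systemd-managed host implies the systemd driver, otherwise cgroupfs.

package main

import (
	"fmt"
	"os"
)

// detectCgroupDriver is an illustrative heuristic: cgroup v2 exposes
// /sys/fs/cgroup/cgroup.controllers, and /run/systemd/system exists when
// systemd is PID 1; either signal suggests the systemd cgroup driver.
func detectCgroupDriver() string {
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		return "systemd"
	}
	if _, err := os.Stat("/run/systemd/system"); err == nil {
		return "systemd"
	}
	return "cgroupfs"
}

func main() {
	fmt.Println("detected cgroup driver:", detectCgroupDriver())
}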
	I1129 09:01:34.869251  493486 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1129 09:01:34.883461  493486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1129 09:01:34.895885  493486 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:01:34.895942  493486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:01:34.912002  493486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:01:34.929350  493486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:01:35.015369  493486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:01:35.101537  493486 docker.go:234] disabling docker service ...
	I1129 09:01:35.101597  493486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:01:35.120759  493486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:01:35.133226  493486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:01:35.217122  493486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:01:35.301702  493486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:01:35.314440  493486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:01:35.328312  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1129 09:01:35.338331  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1129 09:01:35.346975  493486 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1129 09:01:35.347033  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1129 09:01:35.355511  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:01:35.363986  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1129 09:01:35.372342  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:01:35.380589  493486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:01:35.388205  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1129 09:01:35.396344  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1129 09:01:35.404459  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
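The block of `sed -i` commands above rewrites /etc/containerd/config.toml in place, most importantly forcing `SystemdCgroup = true` so the runc shim uses the cgroup driver detected earlier. A rough Go equivalent of that single edit, with the path and pattern taken from the sed expression (not minikube's containerd.go):

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Same effect as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))

	if err := os.WriteFile(path, out, 0644); err != nil {
		log.Fatal(err)
	}
}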
	I1129 09:01:35.412783  493486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:01:35.420177  493486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:01:35.427378  493486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:01:35.508150  493486 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1129 09:01:35.605801  493486 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1129 09:01:35.605868  493486 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1129 09:01:35.610095  493486 start.go:564] Will wait 60s for crictl version
	I1129 09:01:35.610140  493486 ssh_runner.go:195] Run: which crictl
	I1129 09:01:35.613826  493486 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:01:35.640869  493486 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1129 09:01:35.640947  493486 ssh_runner.go:195] Run: containerd --version
	I1129 09:01:35.662573  493486 ssh_runner.go:195] Run: containerd --version
	I1129 09:01:35.686990  493486 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1129 09:01:35.688126  493486 cli_runner.go:164] Run: docker network inspect old-k8s-version-295154 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:01:35.705269  493486 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1129 09:01:35.709565  493486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
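The bash pipeline above makes the host.minikube.internal mapping idempotent: any existing line ending in the name is dropped, the fresh IP-to-name entry is appended, and the result is copied back over /etc/hosts. The same update sketched in Go (the helper name is invented for illustration):

package main

import (
	"log"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for the given host name and
// appends "ip<TAB>host", mirroring the grep -v / echo pipeline in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // remove stale mapping
		}
		kept = append(kept, line)
	}
	// Trim trailing blank lines before appending the new entry.
	for len(kept) > 0 && kept[len(kept)-1] == "" {
		kept = kept[:len(kept)-1]
	}
	kept = append(kept, ip+"\t"+host, "")
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}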
	I1129 09:01:35.720029  493486 kubeadm.go:884] updating cluster {Name:old-k8s-version-295154 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-295154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:01:35.720146  493486 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1129 09:01:35.720192  493486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:01:35.745337  493486 containerd.go:627] all images are preloaded for containerd runtime.
	I1129 09:01:35.745359  493486 containerd.go:534] Images already preloaded, skipping extraction
	I1129 09:01:35.745433  493486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:01:35.768552  493486 containerd.go:627] all images are preloaded for containerd runtime.
	I1129 09:01:35.768573  493486 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:01:35.768582  493486 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 containerd true true} ...
	I1129 09:01:35.768708  493486 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-295154 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-295154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:01:35.768800  493486 ssh_runner.go:195] Run: sudo crictl info
	I1129 09:01:35.793684  493486 cni.go:84] Creating CNI manager for ""
	I1129 09:01:35.793704  493486 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:01:35.793722  493486 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:01:35.793760  493486 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-295154 NodeName:old-k8s-version-295154 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:01:35.793881  493486 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-295154"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 09:01:35.793941  493486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1129 09:01:35.801702  493486 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:01:35.801779  493486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:01:35.809370  493486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1129 09:01:35.821645  493486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:01:35.837123  493486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2175 bytes)
	I1129 09:01:35.849282  493486 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:01:35.852777  493486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:01:35.862291  493486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:01:35.945522  493486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:01:35.967020  493486 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154 for IP: 192.168.76.2
	I1129 09:01:35.967046  493486 certs.go:195] generating shared ca certs ...
	I1129 09:01:35.967066  493486 certs.go:227] acquiring lock for ca certs: {Name:mk5e6bcae0a6944966b241f3c6197a472703c991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:35.967208  493486 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.key
	I1129 09:01:35.967259  493486 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.key
	I1129 09:01:35.967269  493486 certs.go:257] generating profile certs ...
	I1129 09:01:35.967334  493486 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.key
	I1129 09:01:35.967347  493486 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.crt with IP's: []
	I1129 09:01:36.097254  493486 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.crt ...
	I1129 09:01:36.097290  493486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.crt: {Name:mk21cfae97f1407d02cd99fe2a74be759b699397 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:36.097496  493486 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.key ...
	I1129 09:01:36.097514  493486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.key: {Name:mk0736bb845004e9c4d4a2d8602930ec0568eec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:36.097631  493486 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.key.a040bf72
	I1129 09:01:36.097693  493486 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.crt.a040bf72 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1129 09:01:36.144552  493486 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.crt.a040bf72 ...
	I1129 09:01:36.144579  493486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.crt.a040bf72: {Name:mk3fedcec97acb487835213600ee8b696c362f94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:36.144774  493486 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.key.a040bf72 ...
	I1129 09:01:36.144793  493486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.key.a040bf72: {Name:mk9dc52d2daf1391895a4ee3c561f559be0e2755 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:36.144904  493486 certs.go:382] copying /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.crt.a040bf72 -> /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.crt
	I1129 09:01:36.145012  493486 certs.go:386] copying /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.key.a040bf72 -> /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.key
	I1129 09:01:36.145117  493486 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.key
	I1129 09:01:36.145138  493486 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.crt with IP's: []
	I1129 09:01:36.307914  493486 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.crt ...
	I1129 09:01:36.307946  493486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.crt: {Name:mk698ad1b9e2e29d385fd97b123d5b48273c6d5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:36.308144  493486 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.key ...
	I1129 09:01:36.308172  493486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.key: {Name:mkcfd3db96260b6b8677060f32dcbd4dd8f838bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:36.308432  493486 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483.pem (1338 bytes)
	W1129 09:01:36.308490  493486 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483_empty.pem, impossibly tiny 0 bytes
	I1129 09:01:36.308506  493486 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 09:01:36.308543  493486 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:01:36.308590  493486 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:01:36.308633  493486 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem (1679 bytes)
	I1129 09:01:36.308689  493486 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem (1708 bytes)
	I1129 09:01:36.309360  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:01:36.328372  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:01:36.345872  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:01:36.363285  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 09:01:36.380427  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1129 09:01:36.397563  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 09:01:36.414929  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:01:36.432334  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 09:01:36.449233  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem --> /usr/share/ca-certificates/2594832.pem (1708 bytes)
	I1129 09:01:36.469085  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:01:36.485869  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483.pem --> /usr/share/ca-certificates/259483.pem (1338 bytes)
	I1129 09:01:36.502784  493486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:01:36.515208  493486 ssh_runner.go:195] Run: openssl version
	I1129 09:01:36.521390  493486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:01:36.529514  493486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:01:36.533021  493486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:01:36.533062  493486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:01:36.567579  493486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 09:01:36.576162  493486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259483.pem && ln -fs /usr/share/ca-certificates/259483.pem /etc/ssl/certs/259483.pem"
	I1129 09:01:36.584343  493486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259483.pem
	I1129 09:01:36.588122  493486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:35 /usr/share/ca-certificates/259483.pem
	I1129 09:01:36.588176  493486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259483.pem
	I1129 09:01:36.626659  493486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259483.pem /etc/ssl/certs/51391683.0"
	I1129 09:01:36.635780  493486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2594832.pem && ln -fs /usr/share/ca-certificates/2594832.pem /etc/ssl/certs/2594832.pem"
	I1129 09:01:36.644862  493486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2594832.pem
	I1129 09:01:36.648851  493486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:35 /usr/share/ca-certificates/2594832.pem
	I1129 09:01:36.648906  493486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2594832.pem
	I1129 09:01:36.691340  493486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2594832.pem /etc/ssl/certs/3ec20f2e.0"
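The openssl/ln pairs above follow the c_rehash convention: `openssl x509 -hash -noout` prints the certificate's subject-name hash, and a symlink named `<hash>.0` under /etc/ssl/certs lets TLS libraries locate the CA by that hash at verification time. A small Go sketch of creating such a link, shelling out to openssl with the paths from the log (illustrative only, not minikube's certs.go):

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash creates /etc/ssl/certs/<subject-hash>.0 -> certPath if the
// link is not already present, where <subject-hash> is the value printed by
// `openssl x509 -hash -noout -in certPath`.
func linkCertByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}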
	I1129 09:01:36.701173  493486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:01:36.705050  493486 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 09:01:36.705110  493486 kubeadm.go:401] StartCluster: {Name:old-k8s-version-295154 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-295154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:01:36.705201  493486 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1129 09:01:36.705272  493486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:01:36.734535  493486 cri.go:89] found id: ""
	I1129 09:01:36.734592  493486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:01:36.743400  493486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 09:01:36.751273  493486 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 09:01:36.751332  493486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 09:01:36.760386  493486 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 09:01:36.760404  493486 kubeadm.go:158] found existing configuration files:
	
	I1129 09:01:36.760450  493486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 09:01:36.768796  493486 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 09:01:36.768854  493486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 09:01:36.776326  493486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 09:01:36.784663  493486 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 09:01:36.784720  493486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 09:01:36.793650  493486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 09:01:36.801817  493486 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 09:01:36.801887  493486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 09:01:36.811081  493486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 09:01:36.819075  493486 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 09:01:36.819130  493486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1129 09:01:36.827369  493486 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 09:01:36.885752  493486 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1129 09:01:36.885824  493486 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 09:01:36.932588  493486 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 09:01:36.932993  493486 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1129 09:01:36.933139  493486 kubeadm.go:319] OS: Linux
	I1129 09:01:36.933232  493486 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 09:01:36.933332  493486 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 09:01:36.933468  493486 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 09:01:36.933539  493486 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 09:01:36.933597  493486 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 09:01:36.933656  493486 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 09:01:36.933717  493486 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 09:01:36.933794  493486 kubeadm.go:319] CGROUPS_IO: enabled
	I1129 09:01:37.018039  493486 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 09:01:37.018169  493486 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 09:01:37.018319  493486 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1129 09:01:37.171075  493486 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 09:01:37.173428  493486 out.go:252]   - Generating certificates and keys ...
	I1129 09:01:37.173535  493486 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 09:01:37.173613  493486 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 09:01:37.301964  493486 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1129 09:01:37.410711  493486 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 09:01:37.550821  493486 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 09:01:37.787553  493486 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 09:01:37.889172  493486 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 09:01:37.889414  493486 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-295154] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1129 09:01:38.063017  493486 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 09:01:38.063214  493486 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-295154] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1129 09:01:38.202234  493486 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 09:01:38.262563  493486 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 09:01:36.787780  494126 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-924441
	
	I1129 09:01:36.787807  494126 ubuntu.go:182] provisioning hostname "no-preload-924441"
	I1129 09:01:36.787868  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:36.808836  494126 main.go:143] libmachine: Using SSH client type: native
	I1129 09:01:36.809153  494126 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1129 09:01:36.809173  494126 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-924441 && echo "no-preload-924441" | sudo tee /etc/hostname
	I1129 09:01:36.973090  494126 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-924441
	
	I1129 09:01:36.973172  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:36.993095  494126 main.go:143] libmachine: Using SSH client type: native
	I1129 09:01:36.993348  494126 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1129 09:01:36.993366  494126 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-924441' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-924441/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-924441' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:01:37.147252  494126 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:01:37.147286  494126 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-255825/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-255825/.minikube}
	I1129 09:01:37.147336  494126 ubuntu.go:190] setting up certificates
	I1129 09:01:37.147350  494126 provision.go:84] configureAuth start
	I1129 09:01:37.147407  494126 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-924441
	I1129 09:01:37.167771  494126 provision.go:143] copyHostCerts
	I1129 09:01:37.167841  494126 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem, removing ...
	I1129 09:01:37.167856  494126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem
	I1129 09:01:37.167941  494126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem (1078 bytes)
	I1129 09:01:37.168073  494126 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem, removing ...
	I1129 09:01:37.168087  494126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem
	I1129 09:01:37.168135  494126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem (1123 bytes)
	I1129 09:01:37.168246  494126 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem, removing ...
	I1129 09:01:37.168259  494126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem
	I1129 09:01:37.168304  494126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem (1679 bytes)
	I1129 09:01:37.168383  494126 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem org=jenkins.no-preload-924441 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-924441]
	I1129 09:01:37.302569  494126 provision.go:177] copyRemoteCerts
	I1129 09:01:37.302625  494126 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:01:37.302676  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:37.320965  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa Username:docker}
	I1129 09:01:37.425520  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:01:37.446589  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 09:01:37.463963  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1129 09:01:37.480486  494126 provision.go:87] duration metric: took 333.119398ms to configureAuth
	I1129 09:01:37.480511  494126 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:01:37.480667  494126 config.go:182] Loaded profile config "no-preload-924441": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:01:37.480680  494126 machine.go:97] duration metric: took 3.880753165s to provisionDockerMachine
	I1129 09:01:37.480691  494126 client.go:176] duration metric: took 7.771282469s to LocalClient.Create
	I1129 09:01:37.480714  494126 start.go:167] duration metric: took 7.771346771s to libmachine.API.Create "no-preload-924441"
	I1129 09:01:37.480726  494126 start.go:293] postStartSetup for "no-preload-924441" (driver="docker")
	I1129 09:01:37.480750  494126 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:01:37.480814  494126 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:01:37.480883  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:37.498996  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa Username:docker}
	I1129 09:01:37.602864  494126 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:01:37.606394  494126 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:01:37.606428  494126 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:01:37.606439  494126 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-255825/.minikube/addons for local assets ...
	I1129 09:01:37.606502  494126 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-255825/.minikube/files for local assets ...
	I1129 09:01:37.606593  494126 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem -> 2594832.pem in /etc/ssl/certs
	I1129 09:01:37.606724  494126 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:01:37.614670  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem --> /etc/ssl/certs/2594832.pem (1708 bytes)
	I1129 09:01:37.635134  494126 start.go:296] duration metric: took 154.380805ms for postStartSetup
	I1129 09:01:37.635554  494126 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-924441
	I1129 09:01:37.655528  494126 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/config.json ...
	I1129 09:01:37.655850  494126 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:01:37.655900  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:37.677317  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa Username:docker}
	I1129 09:01:37.781275  494126 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:01:37.786042  494126 start.go:128] duration metric: took 8.07881841s to createHost
	I1129 09:01:37.786069  494126 start.go:83] releasing machines lock for "no-preload-924441", held for 8.078998368s
	I1129 09:01:37.786141  494126 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-924441
	I1129 09:01:37.805459  494126 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:01:37.805494  494126 ssh_runner.go:195] Run: cat /version.json
	I1129 09:01:37.805552  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:37.805561  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:37.824515  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa Username:docker}
	I1129 09:01:37.825042  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa Username:docker}
	I1129 09:01:37.978797  494126 ssh_runner.go:195] Run: systemctl --version
	I1129 09:01:37.985561  494126 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:01:37.990121  494126 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:01:37.990198  494126 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:01:38.014806  494126 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1129 09:01:38.014833  494126 start.go:496] detecting cgroup driver to use...
	I1129 09:01:38.014872  494126 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:01:38.014922  494126 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1129 09:01:38.028890  494126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1129 09:01:38.040635  494126 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:01:38.040704  494126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:01:38.059274  494126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:01:38.079903  494126 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:01:38.160895  494126 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:01:38.248638  494126 docker.go:234] disabling docker service ...
	I1129 09:01:38.248693  494126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:01:38.270699  494126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:01:38.283241  494126 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:01:38.364018  494126 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:01:38.451578  494126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:01:38.464900  494126 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:01:38.478711  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1129 09:01:38.488688  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1129 09:01:38.497188  494126 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1129 09:01:38.497235  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1129 09:01:38.506143  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:01:38.514500  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1129 09:01:38.522578  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:01:38.530605  494126 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:01:38.538074  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1129 09:01:38.546395  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1129 09:01:38.554633  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1129 09:01:38.564192  494126 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:01:38.571328  494126 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:01:38.578488  494126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:01:38.657072  494126 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1129 09:01:38.731899  494126 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1129 09:01:38.731970  494126 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1129 09:01:38.736165  494126 start.go:564] Will wait 60s for crictl version
	I1129 09:01:38.736223  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:38.739821  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:01:38.765727  494126 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1129 09:01:38.765799  494126 ssh_runner.go:195] Run: containerd --version
	I1129 09:01:38.788554  494126 ssh_runner.go:195] Run: containerd --version
	I1129 09:01:38.813801  494126 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1129 09:01:38.554215  493486 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 09:01:38.554337  493486 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 09:01:38.871587  493486 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 09:01:39.076048  493486 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 09:01:39.365556  493486 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 09:01:39.428949  493486 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 09:01:39.429579  493486 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 09:01:39.438444  493486 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 09:01:38.814940  494126 cli_runner.go:164] Run: docker network inspect no-preload-924441 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:01:38.832444  494126 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1129 09:01:38.836556  494126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:01:38.846826  494126 kubeadm.go:884] updating cluster {Name:no-preload-924441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-924441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:01:38.846940  494126 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:01:38.846988  494126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:01:38.875513  494126 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1129 09:01:38.875537  494126 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1129 09:01:38.875606  494126 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:38.875606  494126 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:01:38.875633  494126 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:01:38.875642  494126 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:01:38.875663  494126 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1129 09:01:38.875672  494126 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:01:38.875613  494126 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1129 09:01:38.875710  494126 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:01:38.877065  494126 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:01:38.877082  494126 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:01:38.877098  494126 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:01:38.877104  494126 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1129 09:01:38.877132  494126 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:01:38.877185  494126 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:01:38.877233  494126 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:38.877189  494126 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
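The "daemon lookup ... No such image" lines above are an expected miss: the host Docker daemon is consulted first for each image, and only then does the run fall back to the on-disk cache and to probing the node's containerd store (the ctr/crictl calls that follow). A manual equivalent of those probes, using the pause image as the example, would be:

	sudo ctr -n k8s.io images ls "name==registry.k8s.io/pause:3.10.1"
	sudo crictl images --output json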
	I1129 09:01:39.045541  494126 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813"
	I1129 09:01:39.045605  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:01:39.049466  494126 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f"
	I1129 09:01:39.049525  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:01:39.055696  494126 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97"
	I1129 09:01:39.055787  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:01:39.065913  494126 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115"
	I1129 09:01:39.065987  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
	I1129 09:01:39.071326  494126 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1129 09:01:39.071386  494126 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:01:39.071433  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.072494  494126 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969"
	I1129 09:01:39.072560  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:01:39.074055  494126 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1129 09:01:39.074103  494126 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:01:39.074155  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.079805  494126 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1129 09:01:39.079853  494126 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:01:39.079906  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.090225  494126 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1129 09:01:39.090271  494126 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1129 09:01:39.090279  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:01:39.090318  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.094954  494126 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7"
	I1129 09:01:39.095016  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:01:39.096356  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:01:39.096365  494126 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1129 09:01:39.096402  494126 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:01:39.096438  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:01:39.096440  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.108053  494126 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I1129 09:01:39.108111  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1129 09:01:39.125198  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:01:39.125300  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1129 09:01:39.125361  494126 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1129 09:01:39.125408  494126 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:01:39.125455  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.128374  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:01:39.132565  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:01:39.132640  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:01:39.138113  494126 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1129 09:01:39.138163  494126 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1129 09:01:39.138200  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.167013  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:01:39.167128  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:01:39.167330  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:01:39.167330  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1129 09:01:39.167996  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:01:39.173113  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:01:39.173171  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1129 09:01:39.214078  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1129 09:01:39.214193  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1129 09:01:39.214389  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:01:39.214576  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1129 09:01:39.220552  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1129 09:01:39.220649  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1129 09:01:39.220857  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1129 09:01:39.220895  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1129 09:01:39.222433  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:01:39.222493  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1129 09:01:39.222587  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1129 09:01:39.222669  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1129 09:01:39.275608  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:01:39.275622  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1129 09:01:39.275679  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1129 09:01:39.275707  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1129 09:01:39.275716  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1129 09:01:39.287672  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1129 09:01:39.287708  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1129 09:01:39.287708  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1129 09:01:39.287808  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1129 09:01:39.287825  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1129 09:01:39.339051  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1129 09:01:39.339082  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1129 09:01:39.339092  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1129 09:01:39.339110  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1129 09:01:39.339137  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1129 09:01:39.339173  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1129 09:01:39.339202  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1129 09:01:39.339317  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1129 09:01:39.424948  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1129 09:01:39.424997  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1129 09:01:39.425030  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1129 09:01:39.425058  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
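Each "existence check ... Process exited with status 1" block above is the normal first-run path: the stat probe shows the image tarball is not on the node yet, so it is copied over from the host cache and imported into containerd afterwards (the "Loading image" / "ctr ... images import" lines further down). Condensed to one image, the flow is approximately:

	# 1. probe for a previously transferred tarball on the node
	stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	# 2. on a miss, copy the cached tarball over ssh (the scp lines above)
	# 3. import it into containerd's k8s.io namespace
	sudo ctr -n k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0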
	I1129 09:01:36.592807  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:01:36.593240  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:01:36.593304  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:01:36.593360  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:01:36.620981  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:36.621002  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:36.621008  460401 cri.go:89] found id: ""
	I1129 09:01:36.621018  460401 logs.go:282] 2 containers: [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:01:36.621079  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.627593  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.632350  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:01:36.632420  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:01:36.660070  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:36.660091  460401 cri.go:89] found id: ""
	I1129 09:01:36.660100  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:01:36.660156  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.664644  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:01:36.664720  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:01:36.696935  460401 cri.go:89] found id: ""
	I1129 09:01:36.696967  460401 logs.go:282] 0 containers: []
	W1129 09:01:36.696977  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:01:36.696985  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:01:36.697045  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:01:36.726832  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:36.726857  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:36.726864  460401 cri.go:89] found id: ""
	I1129 09:01:36.726874  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:01:36.726928  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.732693  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.737783  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:01:36.737848  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:01:36.765201  460401 cri.go:89] found id: ""
	I1129 09:01:36.765229  460401 logs.go:282] 0 containers: []
	W1129 09:01:36.765238  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:01:36.765245  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:01:36.765300  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:01:36.795203  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:36.795231  460401 cri.go:89] found id: "f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:36.795237  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:36.795242  460401 cri.go:89] found id: ""
	I1129 09:01:36.795251  460401 logs.go:282] 3 containers: [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:01:36.795316  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.801008  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.806325  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.811017  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:01:36.811088  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:01:36.840359  460401 cri.go:89] found id: ""
	I1129 09:01:36.840386  460401 logs.go:282] 0 containers: []
	W1129 09:01:36.840397  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:01:36.840406  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:01:36.840469  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:01:36.874045  460401 cri.go:89] found id: ""
	I1129 09:01:36.874068  460401 logs.go:282] 0 containers: []
	W1129 09:01:36.874075  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:01:36.874085  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:01:36.874099  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:01:36.950404  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:01:36.950426  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:01:36.950442  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:36.994232  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:01:36.994264  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:37.049507  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:01:37.049546  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:37.087133  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:01:37.087165  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:37.117577  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:01:37.117602  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:37.154176  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:01:37.154210  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:37.197090  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:01:37.197121  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:37.240775  460401 logs.go:123] Gathering logs for kube-controller-manager [f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00] ...
	I1129 09:01:37.240811  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:37.269234  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:01:37.269260  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:01:37.312948  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:01:37.312979  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:01:37.348500  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:01:37.348527  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:01:37.435755  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:01:37.435786  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:01:39.440026  493486 out.go:252]   - Booting up control plane ...
	I1129 09:01:39.440161  493486 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 09:01:39.440285  493486 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 09:01:39.440970  493486 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 09:01:39.459308  493486 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 09:01:39.460971  493486 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 09:01:39.461057  493486 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 09:01:39.610284  493486 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1129 09:01:39.952440  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:01:39.952996  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:01:39.953076  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:01:39.953145  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:01:39.990073  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:39.990100  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:39.990107  460401 cri.go:89] found id: ""
	I1129 09:01:39.990117  460401 logs.go:282] 2 containers: [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:01:39.990183  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.996871  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:40.002374  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:01:40.002458  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:01:40.036502  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:40.036525  460401 cri.go:89] found id: ""
	I1129 09:01:40.036542  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:01:40.036600  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:40.044171  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:01:40.044261  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:01:40.084048  460401 cri.go:89] found id: ""
	I1129 09:01:40.084165  460401 logs.go:282] 0 containers: []
	W1129 09:01:40.084184  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:01:40.084195  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:01:40.084329  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:01:40.116869  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:40.116899  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:40.116905  460401 cri.go:89] found id: ""
	I1129 09:01:40.116916  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:01:40.116982  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:40.123222  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:40.128079  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:01:40.128146  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:01:40.159071  460401 cri.go:89] found id: ""
	I1129 09:01:40.159101  460401 logs.go:282] 0 containers: []
	W1129 09:01:40.159112  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:01:40.159120  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:01:40.159178  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:01:40.191945  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:40.191973  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:40.191979  460401 cri.go:89] found id: ""
	I1129 09:01:40.191990  460401 logs.go:282] 2 containers: [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:01:40.192055  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:40.197191  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:40.202276  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:01:40.202350  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:01:40.236481  460401 cri.go:89] found id: ""
	I1129 09:01:40.236510  460401 logs.go:282] 0 containers: []
	W1129 09:01:40.236521  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:01:40.236528  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:01:40.236597  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:01:40.266476  460401 cri.go:89] found id: ""
	I1129 09:01:40.266505  460401 logs.go:282] 0 containers: []
	W1129 09:01:40.266516  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:01:40.266529  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:01:40.266547  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:40.310670  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:01:40.310713  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:40.362446  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:01:40.362487  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:40.399108  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:01:40.399138  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:40.435770  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:01:40.435799  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:01:40.485497  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:01:40.485541  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:01:40.502944  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:01:40.502977  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:01:40.592582  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:01:40.592610  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:01:40.592626  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:40.634792  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:01:40.634828  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:01:40.678348  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:01:40.678382  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:01:40.797799  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:01:40.797849  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:40.854148  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:01:40.854196  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:43.404360  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:01:43.404858  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:01:43.404925  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:01:43.404996  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:01:43.435800  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:43.435836  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:43.435843  460401 cri.go:89] found id: ""
	I1129 09:01:43.435854  460401 logs.go:282] 2 containers: [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:01:43.435923  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:43.441287  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:43.445761  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:01:43.445837  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:01:43.474830  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:43.474859  460401 cri.go:89] found id: ""
	I1129 09:01:43.474870  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:01:43.474932  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:43.481397  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:01:43.481483  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:01:43.513967  460401 cri.go:89] found id: ""
	I1129 09:01:43.513995  460401 logs.go:282] 0 containers: []
	W1129 09:01:43.514006  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:01:43.514014  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:01:43.514074  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:01:43.550388  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:43.550416  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:43.550421  460401 cri.go:89] found id: ""
	I1129 09:01:43.550431  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:01:43.550505  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:43.557316  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:43.563173  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:01:43.563248  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:01:43.599482  460401 cri.go:89] found id: ""
	I1129 09:01:43.599524  460401 logs.go:282] 0 containers: []
	W1129 09:01:43.599535  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:01:43.599545  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:01:43.599611  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:01:43.637030  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:43.637053  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:43.637059  460401 cri.go:89] found id: ""
	I1129 09:01:43.637069  460401 logs.go:282] 2 containers: [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:01:43.637130  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:43.643786  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:43.650011  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:01:43.650089  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:01:43.687244  460401 cri.go:89] found id: ""
	I1129 09:01:43.687273  460401 logs.go:282] 0 containers: []
	W1129 09:01:43.687295  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:01:43.687303  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:01:43.687372  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:01:43.726453  460401 cri.go:89] found id: ""
	I1129 09:01:43.726490  460401 logs.go:282] 0 containers: []
	W1129 09:01:43.726501  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:01:43.726515  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:01:43.726533  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:43.795442  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:01:43.795490  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:43.841417  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:01:43.841457  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:43.888511  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:01:43.888554  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:01:43.930753  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:01:43.930789  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:01:44.043358  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:01:44.043410  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:01:44.065065  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:01:44.065107  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:44.112915  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:01:44.112958  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:01:44.174077  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:01:44.174120  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:01:44.247887  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:01:44.247909  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:01:44.247927  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:44.290842  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:01:44.290882  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:44.335297  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:01:44.335330  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:39.522040  494126 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1129 09:01:39.522116  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1129 09:01:39.664265  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1129 09:01:39.664314  494126 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1129 09:01:39.664386  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1129 09:01:40.291377  494126 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1129 09:01:40.291450  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:40.811289  494126 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.146868238s)
	I1129 09:01:40.811331  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1129 09:01:40.811358  494126 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1129 09:01:40.811407  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1129 09:01:40.811531  494126 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1129 09:01:40.811570  494126 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:40.811610  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:41.858427  494126 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.046983131s)
	I1129 09:01:41.858463  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1129 09:01:41.858488  494126 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1129 09:01:41.858484  494126 ssh_runner.go:235] Completed: which crictl: (1.046843529s)
	I1129 09:01:41.858549  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1129 09:01:41.858557  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:43.352594  494126 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.494004994s)
	I1129 09:01:43.352634  494126 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.49406142s)
	I1129 09:01:43.352657  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1129 09:01:43.352684  494126 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1129 09:01:43.352721  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:43.352741  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1129 09:01:44.495181  494126 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.142420788s)
	I1129 09:01:44.495251  494126 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.142485031s)
	I1129 09:01:44.495274  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:44.495280  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1129 09:01:44.495307  494126 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1129 09:01:44.495357  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
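Once these imports finish, the earlier "does not exist at hash ... in container runtime" conditions should clear; a quick spot check from the node, reusing the same tooling this run relies on, is:

	sudo crictl images | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|kube-proxy|etcd|coredns|pause'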
	I1129 09:01:44.611298  493486 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.002099 seconds
	I1129 09:01:44.611461  493486 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 09:01:44.626505  493486 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 09:01:45.150669  493486 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 09:01:45.150981  493486 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-295154 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 09:01:45.666153  493486 kubeadm.go:319] [bootstrap-token] Using token: fc3siq.brm7sjv6bjwb7j34
	I1129 09:01:45.667757  493486 out.go:252]   - Configuring RBAC rules ...
	I1129 09:01:45.667991  493486 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 09:01:45.673404  493486 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 09:01:45.685336  493486 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 09:01:45.691974  493486 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 09:01:45.695311  493486 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 09:01:45.698699  493486 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 09:01:45.712796  493486 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 09:01:45.913473  493486 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 09:01:46.081267  493486 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 09:01:46.081993  493486 kubeadm.go:319] 
	I1129 09:01:46.082087  493486 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 09:01:46.082095  493486 kubeadm.go:319] 
	I1129 09:01:46.082160  493486 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 09:01:46.082179  493486 kubeadm.go:319] 
	I1129 09:01:46.082199  493486 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 09:01:46.082251  493486 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 09:01:46.082302  493486 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 09:01:46.082308  493486 kubeadm.go:319] 
	I1129 09:01:46.082372  493486 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 09:01:46.082377  493486 kubeadm.go:319] 
	I1129 09:01:46.082434  493486 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 09:01:46.082445  493486 kubeadm.go:319] 
	I1129 09:01:46.082520  493486 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 09:01:46.082627  493486 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 09:01:46.082750  493486 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 09:01:46.082756  493486 kubeadm.go:319] 
	I1129 09:01:46.082891  493486 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 09:01:46.083019  493486 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 09:01:46.083030  493486 kubeadm.go:319] 
	I1129 09:01:46.083149  493486 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token fc3siq.brm7sjv6bjwb7j34 \
	I1129 09:01:46.083319  493486 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:cfb13a4080e942b53ddf5e01885fcdd270ac918e177076400130991e2b6b7778 \
	I1129 09:01:46.083366  493486 kubeadm.go:319] 	--control-plane 
	I1129 09:01:46.083383  493486 kubeadm.go:319] 
	I1129 09:01:46.083539  493486 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 09:01:46.083561  493486 kubeadm.go:319] 
	I1129 09:01:46.083696  493486 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token fc3siq.brm7sjv6bjwb7j34 \
	I1129 09:01:46.083889  493486 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:cfb13a4080e942b53ddf5e01885fcdd270ac918e177076400130991e2b6b7778 
	I1129 09:01:46.087692  493486 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1129 09:01:46.087874  493486 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1129 09:01:46.087925  493486 cni.go:84] Creating CNI manager for ""
	I1129 09:01:46.087942  493486 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:01:46.089437  493486 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1129 09:01:46.093295  493486 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 09:01:46.100033  493486 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1129 09:01:46.100061  493486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 09:01:46.118046  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1129 09:01:47.108562  493486 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 09:01:47.108767  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:47.108838  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-295154 minikube.k8s.io/updated_at=2025_11_29T09_01_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=old-k8s-version-295154 minikube.k8s.io/primary=true
	I1129 09:01:47.209163  493486 ops.go:34] apiserver oom_adj: -16
	I1129 09:01:47.209168  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:47.709726  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:48.209857  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:44.521775  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1129 09:01:44.521916  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1129 09:01:45.636811  494126 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.141419574s)
	I1129 09:01:45.636849  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1129 09:01:45.636857  494126 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.114924181s)
	I1129 09:01:45.636879  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1129 09:01:45.636882  494126 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1129 09:01:45.636902  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1129 09:01:45.636924  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1129 09:01:48.452908  494126 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (2.815950505s)
	I1129 09:01:48.452936  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1129 09:01:48.452972  494126 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1129 09:01:48.453041  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1129 09:01:49.370622  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1129 09:01:49.370663  494126 cache_images.go:125] Successfully loaded all cached images
	I1129 09:01:49.370668  494126 cache_images.go:94] duration metric: took 10.495116704s to LoadCachedImages
	I1129 09:01:49.370682  494126 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 containerd true true} ...
	I1129 09:01:49.370811  494126 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-924441 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-924441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
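The kubelet fragment above is a systemd drop-in: the first, empty ExecStart= clears whatever command the packaged unit defined, and the second sets the minikube-specific invocation; a few lines further down the same content is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and activated with daemon-reload. A rough hand-rolled equivalent, assuming the exact flags shown for this run:

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    printf '%s\n' '[Service]' 'ExecStart=' \
      'ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-924441 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2' \
      | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
    sudo systemctl daemon-reload && sudo systemctl restart kubelet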
	I1129 09:01:49.370873  494126 ssh_runner.go:195] Run: sudo crictl info
	I1129 09:01:49.397690  494126 cni.go:84] Creating CNI manager for ""
	I1129 09:01:49.397714  494126 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:01:49.397740  494126 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:01:49.397786  494126 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-924441 NodeName:no-preload-924441 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:01:49.397929  494126 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-924441"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
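The generated file above bundles four kubeadm documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---; it is written to /var/tmp/minikube/kubeadm.yaml and handed to kubeadm init later in this log. For comparison or troubleshooting, roughly equivalent checks can be run against such a file by hand, for example:

    kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration   # kubeadm's defaults for the same document kinds
    sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml                             # preflight checks only, no control plane is started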
	
	I1129 09:01:49.397999  494126 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:01:49.407101  494126 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1129 09:01:49.407180  494126 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1129 09:01:49.415958  494126 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1129 09:01:49.415978  494126 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256
	I1129 09:01:49.416026  494126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:01:49.416047  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1129 09:01:49.415978  494126 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I1129 09:01:49.416149  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1129 09:01:49.429834  494126 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1129 09:01:49.429872  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1129 09:01:49.429915  494126 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1129 09:01:49.429924  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1129 09:01:49.429943  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
	I1129 09:01:49.438987  494126 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1129 09:01:49.439024  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1129 09:01:46.884140  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:01:48.710027  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:49.210030  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:49.709395  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:50.209866  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:50.709354  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:51.209979  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:51.710291  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:52.209895  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:52.709970  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:53.209937  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:49.969644  494126 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:01:49.978574  494126 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1129 09:01:49.992833  494126 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:01:50.009876  494126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1129 09:01:50.023695  494126 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:01:50.027747  494126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
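The bash one-liner above is an idempotent /etc/hosts update: it filters out any existing control-plane.minikube.internal entry, appends the current mapping, and copies the temp file back with sudo so only the final cp needs root. A standalone restatement with the IP and hostname from this run:

    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
      printf '192.168.103.2\tcontrol-plane.minikube.internal\n'; } > /tmp/h.$$ \
      && sudo cp /tmp/h.$$ /etc/hosts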
	I1129 09:01:50.038376  494126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:01:50.121247  494126 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:01:50.149394  494126 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441 for IP: 192.168.103.2
	I1129 09:01:50.149417  494126 certs.go:195] generating shared ca certs ...
	I1129 09:01:50.149438  494126 certs.go:227] acquiring lock for ca certs: {Name:mk5e6bcae0a6944966b241f3c6197a472703c991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:50.149602  494126 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.key
	I1129 09:01:50.149703  494126 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.key
	I1129 09:01:50.149717  494126 certs.go:257] generating profile certs ...
	I1129 09:01:50.149797  494126 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.key
	I1129 09:01:50.149812  494126 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.crt with IP's: []
	I1129 09:01:50.352856  494126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.crt ...
	I1129 09:01:50.352896  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.crt: {Name:mk24ad5255d5c075502606493622eaafcc9932fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:50.353102  494126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.key ...
	I1129 09:01:50.353115  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.key: {Name:mkdb2263ef25fafc1ea0385357022f8199c8aa35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:50.353223  494126 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.key.f72e5c7b
	I1129 09:01:50.353240  494126 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.crt.f72e5c7b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1129 09:01:50.513341  494126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.crt.f72e5c7b ...
	I1129 09:01:50.513379  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.crt.f72e5c7b: {Name:mk3f760c06958b6df21bcc9bde3527a0c97ad882 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:50.513582  494126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.key.f72e5c7b ...
	I1129 09:01:50.513601  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.key.f72e5c7b: {Name:mk4c8be15a8f6eca407c52c7afdc7ecb10357a29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:50.513678  494126 certs.go:382] copying /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.crt.f72e5c7b -> /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.crt
	I1129 09:01:50.513771  494126 certs.go:386] copying /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.key.f72e5c7b -> /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.key
	I1129 09:01:50.513831  494126 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.key
	I1129 09:01:50.513847  494126 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.crt with IP's: []
	I1129 09:01:50.651114  494126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.crt ...
	I1129 09:01:50.651146  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.crt: {Name:mkbdace4e62ecdfbe11ae904155295b956ffc842 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:50.651330  494126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.key ...
	I1129 09:01:50.651343  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.key: {Name:mk14d837fb2449197c689047daf9f07db1da4b8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:50.651522  494126 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483.pem (1338 bytes)
	W1129 09:01:50.651563  494126 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483_empty.pem, impossibly tiny 0 bytes
	I1129 09:01:50.651573  494126 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 09:01:50.651652  494126 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:01:50.651691  494126 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:01:50.651714  494126 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem (1679 bytes)
	I1129 09:01:50.651769  494126 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem (1708 bytes)
	I1129 09:01:50.652337  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:01:50.672071  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:01:50.691184  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:01:50.711306  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 09:01:50.730860  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1129 09:01:50.750662  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1129 09:01:50.771690  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:01:50.791789  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1129 09:01:50.811356  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483.pem --> /usr/share/ca-certificates/259483.pem (1338 bytes)
	I1129 09:01:50.833983  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem --> /usr/share/ca-certificates/2594832.pem (1708 bytes)
	I1129 09:01:50.853036  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:01:50.871262  494126 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:01:50.885099  494126 ssh_runner.go:195] Run: openssl version
	I1129 09:01:50.892072  494126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259483.pem && ln -fs /usr/share/ca-certificates/259483.pem /etc/ssl/certs/259483.pem"
	I1129 09:01:50.901864  494126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259483.pem
	I1129 09:01:50.906616  494126 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:35 /usr/share/ca-certificates/259483.pem
	I1129 09:01:50.906675  494126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259483.pem
	I1129 09:01:50.943595  494126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259483.pem /etc/ssl/certs/51391683.0"
	I1129 09:01:50.953459  494126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2594832.pem && ln -fs /usr/share/ca-certificates/2594832.pem /etc/ssl/certs/2594832.pem"
	I1129 09:01:50.962610  494126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2594832.pem
	I1129 09:01:50.966703  494126 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:35 /usr/share/ca-certificates/2594832.pem
	I1129 09:01:50.966778  494126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2594832.pem
	I1129 09:01:51.002253  494126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2594832.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:01:51.012487  494126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:01:51.022391  494126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:01:51.026710  494126 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:01:51.026814  494126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:01:51.063394  494126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
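The ls/openssl/ln sequence above populates the node's OpenSSL trust store: each PEM copied to /usr/share/ca-certificates is exposed under /etc/ssl/certs and additionally symlinked by its subject hash (b5213941.0 here), which is how OpenSSL locates CA certificates. The same idiom by hand, assuming the minikubeCA.pem path from this run:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # hash-named link used for CA lookup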
	I1129 09:01:51.073278  494126 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:01:51.077328  494126 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 09:01:51.077396  494126 kubeadm.go:401] StartCluster: {Name:no-preload-924441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-924441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:01:51.077489  494126 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1129 09:01:51.077532  494126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:01:51.106096  494126 cri.go:89] found id: ""
	I1129 09:01:51.106183  494126 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:01:51.115333  494126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 09:01:51.123937  494126 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 09:01:51.124003  494126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 09:01:51.132534  494126 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 09:01:51.132560  494126 kubeadm.go:158] found existing configuration files:
	
	I1129 09:01:51.132605  494126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 09:01:51.140877  494126 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 09:01:51.140937  494126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 09:01:51.149370  494126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 09:01:51.157660  494126 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 09:01:51.157716  494126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 09:01:51.165600  494126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 09:01:51.173968  494126 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 09:01:51.174023  494126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 09:01:51.182141  494126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 09:01:51.190488  494126 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 09:01:51.190548  494126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1129 09:01:51.198568  494126 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 09:01:51.257848  494126 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1129 09:01:51.317135  494126 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1129 09:01:51.885035  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1129 09:01:51.885110  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:01:51.885188  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:01:51.917617  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:01:51.917638  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:51.917644  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:51.917647  460401 cri.go:89] found id: ""
	I1129 09:01:51.917655  460401 logs.go:282] 3 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:01:51.917717  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:51.923877  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:51.929304  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:51.934465  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:01:51.934561  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:01:51.963685  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:51.963708  460401 cri.go:89] found id: ""
	I1129 09:01:51.963719  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:01:51.963801  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:51.968956  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:01:51.969028  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:01:51.996971  460401 cri.go:89] found id: ""
	I1129 09:01:51.997000  460401 logs.go:282] 0 containers: []
	W1129 09:01:51.997007  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:01:51.997013  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:01:51.997078  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:01:52.028822  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:52.028850  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:52.028856  460401 cri.go:89] found id: ""
	I1129 09:01:52.028866  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:01:52.028936  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:52.034812  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:52.039943  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:01:52.040009  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:01:52.069835  460401 cri.go:89] found id: ""
	I1129 09:01:52.069866  460401 logs.go:282] 0 containers: []
	W1129 09:01:52.069878  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:01:52.069886  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:01:52.069952  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:01:52.104321  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:52.104340  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:52.104344  460401 cri.go:89] found id: ""
	I1129 09:01:52.104352  460401 logs.go:282] 2 containers: [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:01:52.104402  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:52.109901  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:52.114778  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:01:52.114862  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:01:52.144981  460401 cri.go:89] found id: ""
	I1129 09:01:52.145005  460401 logs.go:282] 0 containers: []
	W1129 09:01:52.145013  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:01:52.145019  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:01:52.145069  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:01:52.174604  460401 cri.go:89] found id: ""
	I1129 09:01:52.174632  460401 logs.go:282] 0 containers: []
	W1129 09:01:52.174641  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:01:52.174651  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:01:52.174665  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:52.207427  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:01:52.207458  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:52.249558  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:01:52.249600  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:01:52.300742  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:01:52.300785  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:01:52.385321  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:01:52.385365  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:01:52.405491  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:01:52.405533  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:52.448465  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:01:52.448502  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:52.489466  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:01:52.489506  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:52.534107  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:01:52.534146  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:52.572361  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:01:52.572401  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:01:52.606656  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:01:52.606692  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
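The "Gathering logs for ..." block above is the standard post-failure sweep for the 460401 profile: per-container logs via crictl, unit logs via journalctl, dmesg, container status, and a kubectl describe of the nodes. The same data can be collected by hand on the node when a run needs deeper inspection (the container ID below is a placeholder):

    sudo crictl ps -a                               # container status
    sudo crictl logs --tail 400 <container-id>      # one container's recent logs
    sudo journalctl -u kubelet -n 400 --no-pager    # kubelet unit logs
    sudo journalctl -u containerd -n 400 --no-pager # containerd unit logs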
	I1129 09:01:53.710005  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:54.209471  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:54.709414  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:55.209967  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:55.709378  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:56.210032  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:56.709982  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:57.209266  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:57.709968  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:58.209425  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:58.303052  493486 kubeadm.go:1114] duration metric: took 11.19438409s to wait for elevateKubeSystemPrivileges
	I1129 09:01:58.303107  493486 kubeadm.go:403] duration metric: took 21.598001105s to StartCluster
	I1129 09:01:58.303162  493486 settings.go:142] acquiring lock: {Name:mk6dbed29e5e99d89b1cbbd9e561d8f8791ae9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:58.303278  493486 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-255825/kubeconfig
	I1129 09:01:58.305561  493486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/kubeconfig: {Name:mk7d91966efd00ccef892cf02f31ec14469accbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:58.305924  493486 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1129 09:01:58.306112  493486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 09:01:58.306351  493486 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:01:58.306713  493486 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-295154"
	I1129 09:01:58.306776  493486 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-295154"
	I1129 09:01:58.306795  493486 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-295154"
	I1129 09:01:58.306776  493486 config.go:182] Loaded profile config "old-k8s-version-295154": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1129 09:01:58.306807  493486 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-295154"
	I1129 09:01:58.306834  493486 host.go:66] Checking if "old-k8s-version-295154" exists ...
	I1129 09:01:58.307864  493486 out.go:179] * Verifying Kubernetes components...
	I1129 09:01:58.307930  493486 cli_runner.go:164] Run: docker container inspect old-k8s-version-295154 --format={{.State.Status}}
	I1129 09:01:58.308039  493486 cli_runner.go:164] Run: docker container inspect old-k8s-version-295154 --format={{.State.Status}}
	I1129 09:01:58.309327  493486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:01:58.335085  493486 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-295154"
	I1129 09:01:58.335144  493486 host.go:66] Checking if "old-k8s-version-295154" exists ...
	I1129 09:01:58.335642  493486 cli_runner.go:164] Run: docker container inspect old-k8s-version-295154 --format={{.State.Status}}
	I1129 09:01:58.337139  493486 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:58.338693  493486 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:01:58.338716  493486 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:01:58.338899  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:58.368947  493486 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:01:58.368979  493486 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:01:58.369072  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:58.378680  493486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa Username:docker}
	I1129 09:01:58.399464  493486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa Username:docker}
	I1129 09:01:58.438617  493486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 09:01:58.498671  493486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:01:58.528524  493486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:01:58.536443  493486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:01:58.718007  493486 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1129 09:01:58.719713  493486 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-295154" to be "Ready" ...
	I1129 09:01:58.976512  493486 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
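The "host record injected into CoreDNS's ConfigMap" line is the outcome of the sed pipeline a few lines above: the coredns ConfigMap is fetched, a hosts { ... } stanza mapping host.minikube.internal to 192.168.76.1 is spliced in ahead of the forward plugin, and the ConfigMap is replaced. With a working kubeconfig, the result can be inspected with something like:

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'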
	I1129 09:02:01.574795  494126 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1129 09:02:01.574869  494126 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 09:02:01.575071  494126 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 09:02:01.575154  494126 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1129 09:02:01.575204  494126 kubeadm.go:319] OS: Linux
	I1129 09:02:01.575304  494126 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 09:02:01.575403  494126 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 09:02:01.575496  494126 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 09:02:01.575567  494126 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 09:02:01.575645  494126 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 09:02:01.575713  494126 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 09:02:01.575809  494126 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 09:02:01.575872  494126 kubeadm.go:319] CGROUPS_IO: enabled
	I1129 09:02:01.575964  494126 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 09:02:01.576092  494126 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 09:02:01.576217  494126 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1129 09:02:01.576325  494126 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 09:02:01.578171  494126 out.go:252]   - Generating certificates and keys ...
	I1129 09:02:01.578298  494126 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 09:02:01.578401  494126 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 09:02:01.578499  494126 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1129 09:02:01.578589  494126 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 09:02:01.578680  494126 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 09:02:01.578785  494126 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 09:02:01.578876  494126 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 09:02:01.579019  494126 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-924441] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1129 09:02:01.579122  494126 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 09:02:01.579311  494126 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-924441] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1129 09:02:01.579420  494126 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 09:02:01.579532  494126 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 09:02:01.579609  494126 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 09:02:01.579696  494126 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 09:02:01.579806  494126 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 09:02:01.579894  494126 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 09:02:01.579971  494126 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 09:02:01.580076  494126 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 09:02:01.580125  494126 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 09:02:01.580195  494126 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 09:02:01.580259  494126 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 09:02:01.582121  494126 out.go:252]   - Booting up control plane ...
	I1129 09:02:01.582267  494126 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 09:02:01.582364  494126 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 09:02:01.582460  494126 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 09:02:01.582603  494126 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 09:02:01.582773  494126 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1129 09:02:01.582902  494126 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1129 09:02:01.583026  494126 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 09:02:01.583068  494126 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 09:02:01.583182  494126 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1129 09:02:01.583325  494126 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1129 09:02:01.583413  494126 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001845652s
	I1129 09:02:01.583537  494126 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1129 09:02:01.583671  494126 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1129 09:02:01.583787  494126 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1129 09:02:01.583879  494126 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1129 09:02:01.583985  494126 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.852889014s
	I1129 09:02:01.584071  494126 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.023243656s
	I1129 09:02:01.584163  494126 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00195345s
	I1129 09:02:01.584314  494126 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 09:02:01.584493  494126 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 09:02:01.584584  494126 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 09:02:01.584867  494126 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-924441 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 09:02:01.584955  494126 kubeadm.go:319] [bootstrap-token] Using token: mvtuq7.pg2byk8o9fh5nfa2
	I1129 09:02:01.587787  494126 out.go:252]   - Configuring RBAC rules ...
	I1129 09:02:01.587916  494126 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 09:02:01.588028  494126 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 09:02:01.588232  494126 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 09:02:01.588384  494126 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 09:02:01.588517  494126 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 09:02:01.588635  494126 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 09:02:01.588779  494126 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 09:02:01.588837  494126 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 09:02:01.588907  494126 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 09:02:01.588916  494126 kubeadm.go:319] 
	I1129 09:02:01.589016  494126 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 09:02:01.589032  494126 kubeadm.go:319] 
	I1129 09:02:01.589151  494126 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 09:02:01.589160  494126 kubeadm.go:319] 
	I1129 09:02:01.589205  494126 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 09:02:01.589280  494126 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 09:02:01.589374  494126 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 09:02:01.589388  494126 kubeadm.go:319] 
	I1129 09:02:01.589465  494126 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 09:02:01.589473  494126 kubeadm.go:319] 
	I1129 09:02:01.589554  494126 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 09:02:01.589563  494126 kubeadm.go:319] 
	I1129 09:02:01.589607  494126 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 09:02:01.589671  494126 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 09:02:01.589782  494126 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 09:02:01.589795  494126 kubeadm.go:319] 
	I1129 09:02:01.589906  494126 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 09:02:01.590049  494126 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 09:02:01.590058  494126 kubeadm.go:319] 
	I1129 09:02:01.590132  494126 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token mvtuq7.pg2byk8o9fh5nfa2 \
	I1129 09:02:01.590268  494126 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:cfb13a4080e942b53ddf5e01885fcdd270ac918e177076400130991e2b6b7778 \
	I1129 09:02:01.590302  494126 kubeadm.go:319] 	--control-plane 
	I1129 09:02:01.590309  494126 kubeadm.go:319] 
	I1129 09:02:01.590425  494126 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 09:02:01.590434  494126 kubeadm.go:319] 
	I1129 09:02:01.590567  494126 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token mvtuq7.pg2byk8o9fh5nfa2 \
	I1129 09:02:01.590744  494126 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:cfb13a4080e942b53ddf5e01885fcdd270ac918e177076400130991e2b6b7778 
	I1129 09:02:01.590761  494126 cni.go:84] Creating CNI manager for ""
	I1129 09:02:01.590770  494126 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:02:01.592367  494126 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1129 09:01:58.977447  493486 addons.go:530] duration metric: took 671.096745ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1129 09:01:59.226693  493486 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-295154" context rescaled to 1 replicas
	W1129 09:02:00.723077  493486 node_ready.go:57] node "old-k8s-version-295154" has "Ready":"False" status (will retry)
	W1129 09:02:02.723240  493486 node_ready.go:57] node "old-k8s-version-295154" has "Ready":"False" status (will retry)
	I1129 09:02:01.593492  494126 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 09:02:01.598544  494126 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1129 09:02:01.598567  494126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 09:02:01.615144  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1129 09:02:01.883935  494126 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 09:02:01.884024  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:01.884114  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-924441 minikube.k8s.io/updated_at=2025_11_29T09_02_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=no-preload-924441 minikube.k8s.io/primary=true
	I1129 09:02:01.969638  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:01.982178  494126 ops.go:34] apiserver oom_adj: -16
	I1129 09:02:02.470301  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:02.969878  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:03.470379  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:03.970554  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:04.469853  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:02.669495  460401 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.062771993s)
	W1129 09:02:02.669547  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1129 09:02:02.669577  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:02.669596  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:02.710559  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:02.710605  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:04.970119  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:05.470767  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:05.969852  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:06.052010  494126 kubeadm.go:1114] duration metric: took 4.168052566s to wait for elevateKubeSystemPrivileges
	I1129 09:02:06.052057  494126 kubeadm.go:403] duration metric: took 14.974666914s to StartCluster
	I1129 09:02:06.052081  494126 settings.go:142] acquiring lock: {Name:mk6dbed29e5e99d89b1cbbd9e561d8f8791ae9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:02:06.052174  494126 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-255825/kubeconfig
	I1129 09:02:06.054258  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/kubeconfig: {Name:mk7d91966efd00ccef892cf02f31ec14469accbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:02:06.054571  494126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 09:02:06.054563  494126 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1129 09:02:06.054635  494126 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:02:06.054874  494126 config.go:182] Loaded profile config "no-preload-924441": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:02:06.054888  494126 addons.go:70] Setting storage-provisioner=true in profile "no-preload-924441"
	I1129 09:02:06.054933  494126 addons.go:70] Setting default-storageclass=true in profile "no-preload-924441"
	I1129 09:02:06.054947  494126 addons.go:239] Setting addon storage-provisioner=true in "no-preload-924441"
	I1129 09:02:06.054963  494126 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-924441"
	I1129 09:02:06.055012  494126 host.go:66] Checking if "no-preload-924441" exists ...
	I1129 09:02:06.055424  494126 cli_runner.go:164] Run: docker container inspect no-preload-924441 --format={{.State.Status}}
	I1129 09:02:06.055667  494126 cli_runner.go:164] Run: docker container inspect no-preload-924441 --format={{.State.Status}}
	I1129 09:02:06.056967  494126 out.go:179] * Verifying Kubernetes components...
	I1129 09:02:06.060417  494126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:02:06.083076  494126 addons.go:239] Setting addon default-storageclass=true in "no-preload-924441"
	I1129 09:02:06.083127  494126 host.go:66] Checking if "no-preload-924441" exists ...
	I1129 09:02:06.083615  494126 cli_runner.go:164] Run: docker container inspect no-preload-924441 --format={{.State.Status}}
	I1129 09:02:06.086028  494126 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:02:06.087100  494126 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:02:06.087121  494126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:02:06.087200  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:02:06.110337  494126 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:02:06.110366  494126 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:02:06.111183  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:02:06.116769  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa Username:docker}
	I1129 09:02:06.140007  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa Username:docker}
	I1129 09:02:06.151655  494126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 09:02:06.208406  494126 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:02:06.241470  494126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:02:06.273558  494126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:02:06.324896  494126 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1129 09:02:06.327889  494126 node_ready.go:35] waiting up to 6m0s for node "no-preload-924441" to be "Ready" ...
	I1129 09:02:06.574594  494126 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1129 09:02:05.223590  493486 node_ready.go:57] node "old-k8s-version-295154" has "Ready":"False" status (will retry)
	W1129 09:02:07.223929  493486 node_ready.go:57] node "old-k8s-version-295154" has "Ready":"False" status (will retry)
	I1129 09:02:06.575644  494126 addons.go:530] duration metric: took 521.007476ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1129 09:02:06.830448  494126 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-924441" context rescaled to 1 replicas
	W1129 09:02:08.331406  494126 node_ready.go:57] node "no-preload-924441" has "Ready":"False" status (will retry)
	I1129 09:02:05.259668  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:02:07.201576  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:43246->192.168.85.2:8443: read: connection reset by peer
	I1129 09:02:07.201690  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:02:07.201778  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:02:07.234753  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:07.234781  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:02:07.234788  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:07.234793  460401 cri.go:89] found id: ""
	I1129 09:02:07.234804  460401 logs.go:282] 3 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:02:07.234869  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.240257  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.245641  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.251131  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:02:07.251196  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:02:07.280579  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:07.280608  460401 cri.go:89] found id: ""
	I1129 09:02:07.280621  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:02:07.280682  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.286123  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:02:07.286213  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:02:07.317491  460401 cri.go:89] found id: ""
	I1129 09:02:07.317519  460401 logs.go:282] 0 containers: []
	W1129 09:02:07.317528  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:02:07.317534  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:02:07.317586  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:02:07.347513  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:07.347534  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:07.347538  460401 cri.go:89] found id: ""
	I1129 09:02:07.347546  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:02:07.347610  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.353144  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.358223  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:02:07.358303  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:02:07.387488  460401 cri.go:89] found id: ""
	I1129 09:02:07.387516  460401 logs.go:282] 0 containers: []
	W1129 09:02:07.387525  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:02:07.387532  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:02:07.387595  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:02:07.418490  460401 cri.go:89] found id: "c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:07.418512  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:07.418516  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:07.418519  460401 cri.go:89] found id: ""
	I1129 09:02:07.418527  460401 logs.go:282] 3 containers: [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:02:07.418587  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.423956  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.429140  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.434196  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:02:07.434281  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:02:07.463114  460401 cri.go:89] found id: ""
	I1129 09:02:07.463138  460401 logs.go:282] 0 containers: []
	W1129 09:02:07.463148  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:02:07.463156  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:02:07.463222  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:02:07.494533  460401 cri.go:89] found id: ""
	I1129 09:02:07.494567  460401 logs.go:282] 0 containers: []
	W1129 09:02:07.494579  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:02:07.494592  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:02:07.494604  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:02:07.546238  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:02:07.546282  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:02:07.634664  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:02:07.634702  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:02:07.696753  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:02:07.696779  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:07.696796  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:07.733303  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:07.733343  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:07.786770  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:02:07.786809  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:07.824791  460401 logs.go:123] Gathering logs for kube-controller-manager [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6] ...
	I1129 09:02:07.824831  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:07.857029  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:02:07.857058  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:02:07.892009  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:02:07.892046  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:02:07.907552  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:02:07.907596  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	W1129 09:02:07.937558  460401 logs.go:130] failed kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095": Process exited with status 1
	stdout:
	
	stderr:
	E1129 09:02:07.934436    4413 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095\": not found" containerID="5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	time="2025-11-29T09:02:07Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095\": not found"
	 output: 
	** stderr ** 
	E1129 09:02:07.934436    4413 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095\": not found" containerID="5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	time="2025-11-29T09:02:07Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095\": not found"
	
	** /stderr **
	I1129 09:02:07.937577  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:02:07.937591  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:07.976501  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:02:07.976553  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:08.017968  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:02:08.018008  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:08.049057  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:02:08.049090  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	W1129 09:02:09.723662  493486 node_ready.go:57] node "old-k8s-version-295154" has "Ready":"False" status (will retry)
	W1129 09:02:12.223024  493486 node_ready.go:57] node "old-k8s-version-295154" has "Ready":"False" status (will retry)
	I1129 09:02:13.224090  493486 node_ready.go:49] node "old-k8s-version-295154" is "Ready"
	I1129 09:02:13.224128  493486 node_ready.go:38] duration metric: took 14.504358398s for node "old-k8s-version-295154" to be "Ready" ...
	I1129 09:02:13.224148  493486 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:02:13.224211  493486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:02:13.243313  493486 api_server.go:72] duration metric: took 14.93733902s to wait for apiserver process to appear ...
	I1129 09:02:13.243343  493486 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:02:13.243370  493486 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:02:13.250694  493486 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1129 09:02:13.251984  493486 api_server.go:141] control plane version: v1.28.0
	I1129 09:02:13.252015  493486 api_server.go:131] duration metric: took 8.663278ms to wait for apiserver health ...
	I1129 09:02:13.252026  493486 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:02:13.255767  493486 system_pods.go:59] 8 kube-system pods found
	I1129 09:02:13.255813  493486 system_pods.go:61] "coredns-5dd5756b68-phw28" [7fc2b8dd-43dd-43df-8887-9ffa6de36fb4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:13.255822  493486 system_pods.go:61] "etcd-old-k8s-version-295154" [b49cf7c8-8d72-4db9-a96f-d796fd8d9e08] Running
	I1129 09:02:13.255829  493486 system_pods.go:61] "kindnet-k4n9l" [74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8] Running
	I1129 09:02:13.255835  493486 system_pods.go:61] "kube-apiserver-old-k8s-version-295154" [e4ca0771-197f-4d77-97f0-7a7778e227de] Running
	I1129 09:02:13.255841  493486 system_pods.go:61] "kube-controller-manager-old-k8s-version-295154" [6825ac68-da0d-474d-ac97-53398adffd73] Running
	I1129 09:02:13.255847  493486 system_pods.go:61] "kube-proxy-4rfb4" [05ef67c3-0d6e-453d-a0e5-81c649c3e033] Running
	I1129 09:02:13.255853  493486 system_pods.go:61] "kube-scheduler-old-k8s-version-295154" [97d5e6fb-5cb8-4a03-a8df-3f76df5b2671] Running
	I1129 09:02:13.255860  493486 system_pods.go:61] "storage-provisioner" [359871fd-a77c-430a-87c1-b313992718e2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:13.255869  493486 system_pods.go:74] duration metric: took 3.834915ms to wait for pod list to return data ...
	I1129 09:02:13.255879  493486 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:02:13.259936  493486 default_sa.go:45] found service account: "default"
	I1129 09:02:13.259965  493486 default_sa.go:55] duration metric: took 4.078247ms for default service account to be created ...
	I1129 09:02:13.259977  493486 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:02:13.264489  493486 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:13.264528  493486 system_pods.go:89] "coredns-5dd5756b68-phw28" [7fc2b8dd-43dd-43df-8887-9ffa6de36fb4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:13.264536  493486 system_pods.go:89] "etcd-old-k8s-version-295154" [b49cf7c8-8d72-4db9-a96f-d796fd8d9e08] Running
	I1129 09:02:13.264545  493486 system_pods.go:89] "kindnet-k4n9l" [74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8] Running
	I1129 09:02:13.264554  493486 system_pods.go:89] "kube-apiserver-old-k8s-version-295154" [e4ca0771-197f-4d77-97f0-7a7778e227de] Running
	I1129 09:02:13.264562  493486 system_pods.go:89] "kube-controller-manager-old-k8s-version-295154" [6825ac68-da0d-474d-ac97-53398adffd73] Running
	I1129 09:02:13.264567  493486 system_pods.go:89] "kube-proxy-4rfb4" [05ef67c3-0d6e-453d-a0e5-81c649c3e033] Running
	I1129 09:02:13.264572  493486 system_pods.go:89] "kube-scheduler-old-k8s-version-295154" [97d5e6fb-5cb8-4a03-a8df-3f76df5b2671] Running
	I1129 09:02:13.264586  493486 system_pods.go:89] "storage-provisioner" [359871fd-a77c-430a-87c1-b313992718e2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:13.264615  493486 retry.go:31] will retry after 309.906184ms: missing components: kube-dns
	W1129 09:02:10.832100  494126 node_ready.go:57] node "no-preload-924441" has "Ready":"False" status (will retry)
	W1129 09:02:13.330706  494126 node_ready.go:57] node "no-preload-924441" has "Ready":"False" status (will retry)
	I1129 09:02:10.584596  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:02:10.585082  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:02:10.585139  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:02:10.585192  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:02:10.615813  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:10.615833  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:10.615837  460401 cri.go:89] found id: ""
	I1129 09:02:10.615846  460401 logs.go:282] 2 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:02:10.615910  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.621079  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.625927  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:02:10.626017  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:02:10.655780  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:10.655808  460401 cri.go:89] found id: ""
	I1129 09:02:10.655817  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:02:10.655877  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.661197  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:02:10.661278  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:02:10.692401  460401 cri.go:89] found id: ""
	I1129 09:02:10.692423  460401 logs.go:282] 0 containers: []
	W1129 09:02:10.692431  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:02:10.692436  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:02:10.692496  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:02:10.721278  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:10.721303  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:10.721309  460401 cri.go:89] found id: ""
	I1129 09:02:10.721320  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:02:10.721387  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.726913  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.731556  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:02:10.731637  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:02:10.759345  460401 cri.go:89] found id: ""
	I1129 09:02:10.759373  460401 logs.go:282] 0 containers: []
	W1129 09:02:10.759381  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:02:10.759386  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:02:10.759446  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:02:10.790190  460401 cri.go:89] found id: "c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:10.790215  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:10.790221  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:10.790226  460401 cri.go:89] found id: ""
	I1129 09:02:10.790236  460401 logs.go:282] 3 containers: [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:02:10.790305  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.795588  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.800622  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.805263  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:02:10.805338  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:02:10.834942  460401 cri.go:89] found id: ""
	I1129 09:02:10.834973  460401 logs.go:282] 0 containers: []
	W1129 09:02:10.834991  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:02:10.834999  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:02:10.835065  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:02:10.872503  460401 cri.go:89] found id: ""
	I1129 09:02:10.872536  460401 logs.go:282] 0 containers: []
	W1129 09:02:10.872547  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:02:10.872562  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:10.872586  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:10.926644  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:02:10.926681  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:10.965025  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:02:10.965069  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:10.998068  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:02:10.998102  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:02:11.043686  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:02:11.043743  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:02:11.134380  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:02:11.134422  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:02:11.150475  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:02:11.150510  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:02:11.210329  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:02:11.210348  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:02:11.210364  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:11.250422  460401 logs.go:123] Gathering logs for kube-controller-manager [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6] ...
	I1129 09:02:11.250457  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:11.280219  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:02:11.280255  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:11.315565  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:02:11.315596  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:02:11.349327  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:11.349358  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:11.384696  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:02:11.384729  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:13.923850  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:02:13.924341  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:02:13.924398  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:02:13.924461  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:02:13.954410  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:13.954430  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:13.954434  460401 cri.go:89] found id: ""
	I1129 09:02:13.954442  460401 logs.go:282] 2 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:02:13.954501  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:13.959624  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:13.964312  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:02:13.964377  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:02:13.992596  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:13.992625  460401 cri.go:89] found id: ""
	I1129 09:02:13.992636  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:02:13.992703  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:13.998893  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:02:13.998972  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:02:14.028106  460401 cri.go:89] found id: ""
	I1129 09:02:14.028140  460401 logs.go:282] 0 containers: []
	W1129 09:02:14.028152  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:02:14.028161  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:02:14.028230  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:02:14.057393  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:14.057414  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:14.057418  460401 cri.go:89] found id: ""
	I1129 09:02:14.057427  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:02:14.057482  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:14.062623  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:14.067579  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:02:14.067654  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:02:14.102801  460401 cri.go:89] found id: ""
	I1129 09:02:14.102840  460401 logs.go:282] 0 containers: []
	W1129 09:02:14.102853  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:02:14.102860  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:02:14.102925  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:02:14.135951  460401 cri.go:89] found id: "c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:14.135979  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:14.135985  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:14.135988  460401 cri.go:89] found id: ""
	I1129 09:02:14.135998  460401 logs.go:282] 3 containers: [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:02:14.136064  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:14.141983  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:14.147316  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:14.152463  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:02:14.152555  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:02:14.181365  460401 cri.go:89] found id: ""
	I1129 09:02:14.181398  460401 logs.go:282] 0 containers: []
	W1129 09:02:14.181409  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:02:14.181417  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:02:14.181477  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:02:14.210267  460401 cri.go:89] found id: ""
	I1129 09:02:14.210292  460401 logs.go:282] 0 containers: []
	W1129 09:02:14.210300  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:02:14.210310  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:02:14.210323  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:02:14.298625  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:02:14.298662  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:02:14.315504  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:02:14.315529  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:14.357098  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:14.357134  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:14.407082  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:02:14.407133  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:14.441442  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:02:14.441482  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:14.476419  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:02:14.476452  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:02:13.579150  493486 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:13.579183  493486 system_pods.go:89] "coredns-5dd5756b68-phw28" [7fc2b8dd-43dd-43df-8887-9ffa6de36fb4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:13.579189  493486 system_pods.go:89] "etcd-old-k8s-version-295154" [b49cf7c8-8d72-4db9-a96f-d796fd8d9e08] Running
	I1129 09:02:13.579195  493486 system_pods.go:89] "kindnet-k4n9l" [74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8] Running
	I1129 09:02:13.579199  493486 system_pods.go:89] "kube-apiserver-old-k8s-version-295154" [e4ca0771-197f-4d77-97f0-7a7778e227de] Running
	I1129 09:02:13.579203  493486 system_pods.go:89] "kube-controller-manager-old-k8s-version-295154" [6825ac68-da0d-474d-ac97-53398adffd73] Running
	I1129 09:02:13.579206  493486 system_pods.go:89] "kube-proxy-4rfb4" [05ef67c3-0d6e-453d-a0e5-81c649c3e033] Running
	I1129 09:02:13.579210  493486 system_pods.go:89] "kube-scheduler-old-k8s-version-295154" [97d5e6fb-5cb8-4a03-a8df-3f76df5b2671] Running
	I1129 09:02:13.579220  493486 system_pods.go:89] "storage-provisioner" [359871fd-a77c-430a-87c1-b313992718e2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:13.579237  493486 retry.go:31] will retry after 360.039109ms: missing components: kube-dns
	I1129 09:02:13.944039  493486 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:13.944084  493486 system_pods.go:89] "coredns-5dd5756b68-phw28" [7fc2b8dd-43dd-43df-8887-9ffa6de36fb4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:13.944094  493486 system_pods.go:89] "etcd-old-k8s-version-295154" [b49cf7c8-8d72-4db9-a96f-d796fd8d9e08] Running
	I1129 09:02:13.944104  493486 system_pods.go:89] "kindnet-k4n9l" [74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8] Running
	I1129 09:02:13.944110  493486 system_pods.go:89] "kube-apiserver-old-k8s-version-295154" [e4ca0771-197f-4d77-97f0-7a7778e227de] Running
	I1129 09:02:13.944116  493486 system_pods.go:89] "kube-controller-manager-old-k8s-version-295154" [6825ac68-da0d-474d-ac97-53398adffd73] Running
	I1129 09:02:13.944121  493486 system_pods.go:89] "kube-proxy-4rfb4" [05ef67c3-0d6e-453d-a0e5-81c649c3e033] Running
	I1129 09:02:13.944127  493486 system_pods.go:89] "kube-scheduler-old-k8s-version-295154" [97d5e6fb-5cb8-4a03-a8df-3f76df5b2671] Running
	I1129 09:02:13.944133  493486 system_pods.go:89] "storage-provisioner" [359871fd-a77c-430a-87c1-b313992718e2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:13.944166  493486 retry.go:31] will retry after 339.658127ms: missing components: kube-dns
	I1129 09:02:14.288499  493486 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:14.288533  493486 system_pods.go:89] "coredns-5dd5756b68-phw28" [7fc2b8dd-43dd-43df-8887-9ffa6de36fb4] Running
	I1129 09:02:14.288543  493486 system_pods.go:89] "etcd-old-k8s-version-295154" [b49cf7c8-8d72-4db9-a96f-d796fd8d9e08] Running
	I1129 09:02:14.288548  493486 system_pods.go:89] "kindnet-k4n9l" [74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8] Running
	I1129 09:02:14.288553  493486 system_pods.go:89] "kube-apiserver-old-k8s-version-295154" [e4ca0771-197f-4d77-97f0-7a7778e227de] Running
	I1129 09:02:14.288563  493486 system_pods.go:89] "kube-controller-manager-old-k8s-version-295154" [6825ac68-da0d-474d-ac97-53398adffd73] Running
	I1129 09:02:14.288568  493486 system_pods.go:89] "kube-proxy-4rfb4" [05ef67c3-0d6e-453d-a0e5-81c649c3e033] Running
	I1129 09:02:14.288573  493486 system_pods.go:89] "kube-scheduler-old-k8s-version-295154" [97d5e6fb-5cb8-4a03-a8df-3f76df5b2671] Running
	I1129 09:02:14.288578  493486 system_pods.go:89] "storage-provisioner" [359871fd-a77c-430a-87c1-b313992718e2] Running
	I1129 09:02:14.288588  493486 system_pods.go:126] duration metric: took 1.028603527s to wait for k8s-apps to be running ...
	I1129 09:02:14.288601  493486 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:02:14.288662  493486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:02:14.302535  493486 system_svc.go:56] duration metric: took 13.922382ms WaitForService to wait for kubelet
	I1129 09:02:14.302570  493486 kubeadm.go:587] duration metric: took 15.996603485s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:02:14.302594  493486 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:02:14.305508  493486 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1129 09:02:14.305535  493486 node_conditions.go:123] node cpu capacity is 8
	I1129 09:02:14.305552  493486 node_conditions.go:105] duration metric: took 2.953214ms to run NodePressure ...
	I1129 09:02:14.305564  493486 start.go:242] waiting for startup goroutines ...
	I1129 09:02:14.305570  493486 start.go:247] waiting for cluster config update ...
	I1129 09:02:14.305583  493486 start.go:256] writing updated cluster config ...
	I1129 09:02:14.305887  493486 ssh_runner.go:195] Run: rm -f paused
	I1129 09:02:14.309803  493486 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:02:14.314558  493486 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-phw28" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.319446  493486 pod_ready.go:94] pod "coredns-5dd5756b68-phw28" is "Ready"
	I1129 09:02:14.319479  493486 pod_ready.go:86] duration metric: took 4.889509ms for pod "coredns-5dd5756b68-phw28" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.322499  493486 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.326608  493486 pod_ready.go:94] pod "etcd-old-k8s-version-295154" is "Ready"
	I1129 09:02:14.326631  493486 pod_ready.go:86] duration metric: took 4.109693ms for pod "etcd-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.329352  493486 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.333844  493486 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-295154" is "Ready"
	I1129 09:02:14.333867  493486 pod_ready.go:86] duration metric: took 4.49688ms for pod "kube-apiserver-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.336686  493486 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.714439  493486 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-295154" is "Ready"
	I1129 09:02:14.714472  493486 pod_ready.go:86] duration metric: took 377.765984ms for pod "kube-controller-manager-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.915822  493486 pod_ready.go:83] waiting for pod "kube-proxy-4rfb4" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:15.314552  493486 pod_ready.go:94] pod "kube-proxy-4rfb4" is "Ready"
	I1129 09:02:15.314586  493486 pod_ready.go:86] duration metric: took 398.736001ms for pod "kube-proxy-4rfb4" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:15.515989  493486 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:15.913869  493486 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-295154" is "Ready"
	I1129 09:02:15.913896  493486 pod_ready.go:86] duration metric: took 397.877691ms for pod "kube-scheduler-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:15.913908  493486 pod_ready.go:40] duration metric: took 1.604073956s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:02:15.959941  493486 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1129 09:02:15.961883  493486 out.go:203] 
	W1129 09:02:15.963183  493486 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1129 09:02:15.964449  493486 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1129 09:02:15.966035  493486 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-295154" cluster and "default" namespace by default
	W1129 09:02:15.330798  494126 node_ready.go:57] node "no-preload-924441" has "Ready":"False" status (will retry)
	W1129 09:02:17.331851  494126 node_ready.go:57] node "no-preload-924441" has "Ready":"False" status (will retry)
	I1129 09:02:14.509454  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:02:14.509484  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:02:14.571273  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:02:14.571298  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:14.571312  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:14.605440  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:02:14.605476  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:14.642678  460401 logs.go:123] Gathering logs for kube-controller-manager [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6] ...
	I1129 09:02:14.642712  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:14.671483  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:02:14.671514  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:14.701619  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:02:14.701647  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:02:17.246912  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:02:17.247337  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:02:17.247422  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:02:17.247479  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:02:17.277610  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:17.277632  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:17.277637  460401 cri.go:89] found id: ""
	I1129 09:02:17.277647  460401 logs.go:282] 2 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:02:17.277711  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.283531  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.288554  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:02:17.288644  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:02:17.316819  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:17.316847  460401 cri.go:89] found id: ""
	I1129 09:02:17.316857  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:02:17.316921  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.322640  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:02:17.322770  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:02:17.353531  460401 cri.go:89] found id: ""
	I1129 09:02:17.353563  460401 logs.go:282] 0 containers: []
	W1129 09:02:17.353575  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:02:17.353585  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:02:17.353651  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:02:17.384830  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:17.384854  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:17.384858  460401 cri.go:89] found id: ""
	I1129 09:02:17.384867  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:02:17.384932  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.390132  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.395096  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:02:17.395177  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:02:17.425643  460401 cri.go:89] found id: ""
	I1129 09:02:17.425681  460401 logs.go:282] 0 containers: []
	W1129 09:02:17.425692  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:02:17.425704  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:02:17.425788  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:02:17.456077  460401 cri.go:89] found id: "c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:17.456105  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:17.456113  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:17.456136  460401 cri.go:89] found id: ""
	I1129 09:02:17.456148  460401 logs.go:282] 3 containers: [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:02:17.456213  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.461610  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.466727  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.471762  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:02:17.471849  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:02:17.501750  460401 cri.go:89] found id: ""
	I1129 09:02:17.501782  460401 logs.go:282] 0 containers: []
	W1129 09:02:17.501793  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:02:17.501801  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:02:17.501868  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:02:17.531903  460401 cri.go:89] found id: ""
	I1129 09:02:17.531932  460401 logs.go:282] 0 containers: []
	W1129 09:02:17.531942  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:02:17.531956  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:02:17.531972  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:02:17.630517  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:17.630566  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:17.667169  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:02:17.667205  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:17.707311  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:02:17.707360  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:17.746580  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:17.746621  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:17.799162  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:02:17.799207  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:17.839313  460401 logs.go:123] Gathering logs for kube-controller-manager [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6] ...
	I1129 09:02:17.839355  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:17.872700  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:02:17.872742  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:17.904806  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:02:17.904838  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:02:17.920866  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:02:17.920904  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:02:17.983002  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:02:17.983027  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:02:17.983040  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:18.019203  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:02:18.019241  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:02:18.070893  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:02:18.070936  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1129 09:02:19.830479  494126 node_ready.go:57] node "no-preload-924441" has "Ready":"False" status (will retry)
	I1129 09:02:20.833313  494126 node_ready.go:49] node "no-preload-924441" is "Ready"
	I1129 09:02:20.833355  494126 node_ready.go:38] duration metric: took 14.505431475s for node "no-preload-924441" to be "Ready" ...
	I1129 09:02:20.833377  494126 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:02:20.833445  494126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:02:20.850134  494126 api_server.go:72] duration metric: took 14.795523765s to wait for apiserver process to appear ...
	I1129 09:02:20.850165  494126 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:02:20.850190  494126 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1129 09:02:20.856514  494126 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1129 09:02:20.857900  494126 api_server.go:141] control plane version: v1.34.1
	I1129 09:02:20.857933  494126 api_server.go:131] duration metric: took 7.759312ms to wait for apiserver health ...
	I1129 09:02:20.857945  494126 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:02:20.861811  494126 system_pods.go:59] 8 kube-system pods found
	I1129 09:02:20.861851  494126 system_pods.go:61] "coredns-66bc5c9577-nsh8w" [bf2a8ab9-aaca-4ee6-a390-a02099f693d9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:20.861863  494126 system_pods.go:61] "etcd-no-preload-924441" [e3cda1b0-1ca8-4ded-a506-f728fc050781] Running
	I1129 09:02:20.861871  494126 system_pods.go:61] "kindnet-nscfk" [052c2152-0369-4121-b2fe-25b79a00145a] Running
	I1129 09:02:20.861877  494126 system_pods.go:61] "kube-apiserver-no-preload-924441" [08168b39-5d95-4d6b-ac99-3c6ee50a2530] Running
	I1129 09:02:20.861892  494126 system_pods.go:61] "kube-controller-manager-no-preload-924441" [9e84b562-ff11-40c1-a7ab-3682dbbae4be] Running
	I1129 09:02:20.861897  494126 system_pods.go:61] "kube-proxy-96fcg" [c9fd8592-2ec4-4da3-a800-b136c118d379] Running
	I1129 09:02:20.861902  494126 system_pods.go:61] "kube-scheduler-no-preload-924441" [91fa5a87-81d7-4b1c-8334-9c5c4fcf8997] Running
	I1129 09:02:20.861912  494126 system_pods.go:61] "storage-provisioner" [88b64cf8-3233-47bb-be31-6f367a8a1433] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:20.861920  494126 system_pods.go:74] duration metric: took 3.967151ms to wait for pod list to return data ...
	I1129 09:02:20.861931  494126 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:02:20.864542  494126 default_sa.go:45] found service account: "default"
	I1129 09:02:20.864569  494126 default_sa.go:55] duration metric: took 2.631761ms for default service account to be created ...
	I1129 09:02:20.864581  494126 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:02:20.867876  494126 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:20.867913  494126 system_pods.go:89] "coredns-66bc5c9577-nsh8w" [bf2a8ab9-aaca-4ee6-a390-a02099f693d9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:20.867924  494126 system_pods.go:89] "etcd-no-preload-924441" [e3cda1b0-1ca8-4ded-a506-f728fc050781] Running
	I1129 09:02:20.867932  494126 system_pods.go:89] "kindnet-nscfk" [052c2152-0369-4121-b2fe-25b79a00145a] Running
	I1129 09:02:20.867938  494126 system_pods.go:89] "kube-apiserver-no-preload-924441" [08168b39-5d95-4d6b-ac99-3c6ee50a2530] Running
	I1129 09:02:20.867999  494126 system_pods.go:89] "kube-controller-manager-no-preload-924441" [9e84b562-ff11-40c1-a7ab-3682dbbae4be] Running
	I1129 09:02:20.868005  494126 system_pods.go:89] "kube-proxy-96fcg" [c9fd8592-2ec4-4da3-a800-b136c118d379] Running
	I1129 09:02:20.868011  494126 system_pods.go:89] "kube-scheduler-no-preload-924441" [91fa5a87-81d7-4b1c-8334-9c5c4fcf8997] Running
	I1129 09:02:20.868027  494126 system_pods.go:89] "storage-provisioner" [88b64cf8-3233-47bb-be31-6f367a8a1433] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:20.868077  494126 retry.go:31] will retry after 292.54579ms: missing components: kube-dns
	I1129 09:02:21.165357  494126 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:21.165399  494126 system_pods.go:89] "coredns-66bc5c9577-nsh8w" [bf2a8ab9-aaca-4ee6-a390-a02099f693d9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:21.165408  494126 system_pods.go:89] "etcd-no-preload-924441" [e3cda1b0-1ca8-4ded-a506-f728fc050781] Running
	I1129 09:02:21.165416  494126 system_pods.go:89] "kindnet-nscfk" [052c2152-0369-4121-b2fe-25b79a00145a] Running
	I1129 09:02:21.165422  494126 system_pods.go:89] "kube-apiserver-no-preload-924441" [08168b39-5d95-4d6b-ac99-3c6ee50a2530] Running
	I1129 09:02:21.165428  494126 system_pods.go:89] "kube-controller-manager-no-preload-924441" [9e84b562-ff11-40c1-a7ab-3682dbbae4be] Running
	I1129 09:02:21.165434  494126 system_pods.go:89] "kube-proxy-96fcg" [c9fd8592-2ec4-4da3-a800-b136c118d379] Running
	I1129 09:02:21.165439  494126 system_pods.go:89] "kube-scheduler-no-preload-924441" [91fa5a87-81d7-4b1c-8334-9c5c4fcf8997] Running
	I1129 09:02:21.165449  494126 system_pods.go:89] "storage-provisioner" [88b64cf8-3233-47bb-be31-6f367a8a1433] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:21.165470  494126 retry.go:31] will retry after 336.406198ms: missing components: kube-dns
	I1129 09:02:21.505471  494126 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:21.505510  494126 system_pods.go:89] "coredns-66bc5c9577-nsh8w" [bf2a8ab9-aaca-4ee6-a390-a02099f693d9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:21.505516  494126 system_pods.go:89] "etcd-no-preload-924441" [e3cda1b0-1ca8-4ded-a506-f728fc050781] Running
	I1129 09:02:21.505524  494126 system_pods.go:89] "kindnet-nscfk" [052c2152-0369-4121-b2fe-25b79a00145a] Running
	I1129 09:02:21.505528  494126 system_pods.go:89] "kube-apiserver-no-preload-924441" [08168b39-5d95-4d6b-ac99-3c6ee50a2530] Running
	I1129 09:02:21.505531  494126 system_pods.go:89] "kube-controller-manager-no-preload-924441" [9e84b562-ff11-40c1-a7ab-3682dbbae4be] Running
	I1129 09:02:21.505534  494126 system_pods.go:89] "kube-proxy-96fcg" [c9fd8592-2ec4-4da3-a800-b136c118d379] Running
	I1129 09:02:21.505538  494126 system_pods.go:89] "kube-scheduler-no-preload-924441" [91fa5a87-81d7-4b1c-8334-9c5c4fcf8997] Running
	I1129 09:02:21.505542  494126 system_pods.go:89] "storage-provisioner" [88b64cf8-3233-47bb-be31-6f367a8a1433] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:21.505560  494126 retry.go:31] will retry after 447.535618ms: missing components: kube-dns
	I1129 09:02:21.957409  494126 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:21.957439  494126 system_pods.go:89] "coredns-66bc5c9577-nsh8w" [bf2a8ab9-aaca-4ee6-a390-a02099f693d9] Running
	I1129 09:02:21.957444  494126 system_pods.go:89] "etcd-no-preload-924441" [e3cda1b0-1ca8-4ded-a506-f728fc050781] Running
	I1129 09:02:21.957448  494126 system_pods.go:89] "kindnet-nscfk" [052c2152-0369-4121-b2fe-25b79a00145a] Running
	I1129 09:02:21.957451  494126 system_pods.go:89] "kube-apiserver-no-preload-924441" [08168b39-5d95-4d6b-ac99-3c6ee50a2530] Running
	I1129 09:02:21.957456  494126 system_pods.go:89] "kube-controller-manager-no-preload-924441" [9e84b562-ff11-40c1-a7ab-3682dbbae4be] Running
	I1129 09:02:21.957459  494126 system_pods.go:89] "kube-proxy-96fcg" [c9fd8592-2ec4-4da3-a800-b136c118d379] Running
	I1129 09:02:21.957464  494126 system_pods.go:89] "kube-scheduler-no-preload-924441" [91fa5a87-81d7-4b1c-8334-9c5c4fcf8997] Running
	I1129 09:02:21.957467  494126 system_pods.go:89] "storage-provisioner" [88b64cf8-3233-47bb-be31-6f367a8a1433] Running
	I1129 09:02:21.957476  494126 system_pods.go:126] duration metric: took 1.092887723s to wait for k8s-apps to be running ...
	I1129 09:02:21.957498  494126 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:02:21.957549  494126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:02:21.971582  494126 system_svc.go:56] duration metric: took 14.071974ms WaitForService to wait for kubelet
	I1129 09:02:21.971613  494126 kubeadm.go:587] duration metric: took 15.917009838s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:02:21.971632  494126 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:02:21.974426  494126 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1129 09:02:21.974453  494126 node_conditions.go:123] node cpu capacity is 8
	I1129 09:02:21.974471  494126 node_conditions.go:105] duration metric: took 2.83418ms to run NodePressure ...
	I1129 09:02:21.974485  494126 start.go:242] waiting for startup goroutines ...
	I1129 09:02:21.974492  494126 start.go:247] waiting for cluster config update ...
	I1129 09:02:21.974502  494126 start.go:256] writing updated cluster config ...
	I1129 09:02:21.974780  494126 ssh_runner.go:195] Run: rm -f paused
	I1129 09:02:21.978967  494126 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:02:21.982434  494126 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nsh8w" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:21.986370  494126 pod_ready.go:94] pod "coredns-66bc5c9577-nsh8w" is "Ready"
	I1129 09:02:21.986395  494126 pod_ready.go:86] duration metric: took 3.939701ms for pod "coredns-66bc5c9577-nsh8w" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:21.988365  494126 pod_ready.go:83] waiting for pod "etcd-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:21.991850  494126 pod_ready.go:94] pod "etcd-no-preload-924441" is "Ready"
	I1129 09:02:21.991874  494126 pod_ready.go:86] duration metric: took 3.486388ms for pod "etcd-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:21.993587  494126 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:21.997072  494126 pod_ready.go:94] pod "kube-apiserver-no-preload-924441" is "Ready"
	I1129 09:02:21.997092  494126 pod_ready.go:86] duration metric: took 3.484304ms for pod "kube-apiserver-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:21.998698  494126 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:22.382918  494126 pod_ready.go:94] pod "kube-controller-manager-no-preload-924441" is "Ready"
	I1129 09:02:22.382948  494126 pod_ready.go:86] duration metric: took 384.232783ms for pod "kube-controller-manager-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:22.583125  494126 pod_ready.go:83] waiting for pod "kube-proxy-96fcg" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:22.982608  494126 pod_ready.go:94] pod "kube-proxy-96fcg" is "Ready"
	I1129 09:02:22.982639  494126 pod_ready.go:86] duration metric: took 399.48383ms for pod "kube-proxy-96fcg" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:23.184031  494126 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:23.583027  494126 pod_ready.go:94] pod "kube-scheduler-no-preload-924441" is "Ready"
	I1129 09:02:23.583058  494126 pod_ready.go:86] duration metric: took 399.00134ms for pod "kube-scheduler-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:23.583071  494126 pod_ready.go:40] duration metric: took 1.604064431s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:02:23.632822  494126 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:02:23.634677  494126 out.go:179] * Done! kubectl is now configured to use "no-preload-924441" cluster and "default" namespace by default
	I1129 09:02:20.607959  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:02:20.608406  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:02:20.608469  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:02:20.608531  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:02:20.639116  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:20.639148  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:20.639155  460401 cri.go:89] found id: ""
	I1129 09:02:20.639168  460401 logs.go:282] 2 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:02:20.639240  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:20.644749  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:20.649347  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:02:20.649411  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:02:20.677383  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:20.677404  460401 cri.go:89] found id: ""
	I1129 09:02:20.677413  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:02:20.677466  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:20.682625  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:02:20.682708  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:02:20.711021  460401 cri.go:89] found id: ""
	I1129 09:02:20.711050  460401 logs.go:282] 0 containers: []
	W1129 09:02:20.711060  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:02:20.711070  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:02:20.711138  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:02:20.745598  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:20.745626  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:20.745632  460401 cri.go:89] found id: ""
	I1129 09:02:20.745643  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:02:20.745716  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:20.751838  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:20.757804  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:02:20.757881  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:02:20.793640  460401 cri.go:89] found id: ""
	I1129 09:02:20.793671  460401 logs.go:282] 0 containers: []
	W1129 09:02:20.793683  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:02:20.793691  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:02:20.793792  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:02:20.830071  460401 cri.go:89] found id: "c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:20.830099  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:20.830104  460401 cri.go:89] found id: ""
	I1129 09:02:20.830114  460401 logs.go:282] 2 containers: [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:02:20.830179  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:20.837576  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:20.843146  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:02:20.843225  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:02:20.883480  460401 cri.go:89] found id: ""
	I1129 09:02:20.883525  460401 logs.go:282] 0 containers: []
	W1129 09:02:20.883536  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:02:20.883543  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:02:20.883598  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:02:20.923499  460401 cri.go:89] found id: ""
	I1129 09:02:20.923532  460401 logs.go:282] 0 containers: []
	W1129 09:02:20.923543  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:02:20.923557  460401 logs.go:123] Gathering logs for kube-controller-manager [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6] ...
	I1129 09:02:20.923574  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:20.961675  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:02:20.961713  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:20.996489  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:02:20.996524  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:02:21.046535  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:02:21.046596  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:02:21.131239  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:02:21.131286  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:02:21.192537  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:02:21.192557  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:21.192573  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:21.227894  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:02:21.227932  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:21.262592  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:02:21.262632  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:02:21.298034  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:02:21.298076  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:02:21.313593  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:02:21.313626  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:21.355840  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:21.355878  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:21.409528  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:02:21.409570  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:23.946261  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:02:23.946794  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:02:23.946872  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:02:23.946940  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:02:23.978496  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:23.978521  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:23.978525  460401 cri.go:89] found id: ""
	I1129 09:02:23.978533  460401 logs.go:282] 2 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:02:23.978585  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:23.983820  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:23.988502  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:02:23.988563  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:02:24.017479  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:24.017505  460401 cri.go:89] found id: ""
	I1129 09:02:24.017516  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:02:24.017581  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:24.022978  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:02:24.023049  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:02:24.054017  460401 cri.go:89] found id: ""
	I1129 09:02:24.054042  460401 logs.go:282] 0 containers: []
	W1129 09:02:24.054049  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:02:24.054055  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:02:24.054104  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:02:24.083682  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:24.083704  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:24.083710  460401 cri.go:89] found id: ""
	I1129 09:02:24.083720  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:02:24.083797  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:24.089191  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:24.094144  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:02:24.094223  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:02:24.123931  460401 cri.go:89] found id: ""
	I1129 09:02:24.123956  460401 logs.go:282] 0 containers: []
	W1129 09:02:24.123964  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:02:24.123972  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:02:24.124032  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:02:24.158678  460401 cri.go:89] found id: "c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:24.158704  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:24.158710  460401 cri.go:89] found id: ""
	I1129 09:02:24.158721  460401 logs.go:282] 2 containers: [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:02:24.158824  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:24.164380  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:24.170117  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:02:24.170196  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:02:24.202016  460401 cri.go:89] found id: ""
	I1129 09:02:24.202057  460401 logs.go:282] 0 containers: []
	W1129 09:02:24.202066  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:02:24.202072  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:02:24.202123  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:02:24.235359  460401 cri.go:89] found id: ""
	I1129 09:02:24.235388  460401 logs.go:282] 0 containers: []
	W1129 09:02:24.235399  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:02:24.235412  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:02:24.235427  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:02:24.327121  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:24.327167  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:24.380608  460401 logs.go:123] Gathering logs for kube-controller-manager [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6] ...
	I1129 09:02:24.380651  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:24.411895  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:02:24.411923  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:24.450543  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:02:24.450575  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:02:24.500105  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:02:24.500146  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:02:24.534213  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:02:24.534244  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:02:24.548977  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:02:24.549027  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:02:24.610946  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:02:24.610979  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:24.610995  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:24.646378  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:02:24.646412  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:24.681683  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:02:24.681724  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:24.720949  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:02:24.720984  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:27.257815  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:02:27.258260  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:02:27.258319  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:02:27.258379  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:02:27.293527  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:27.293551  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:27.293555  460401 cri.go:89] found id: ""
	I1129 09:02:27.293565  460401 logs.go:282] 2 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:02:27.293624  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:27.299010  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:27.303563  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:02:27.303630  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:02:27.333820  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:27.333843  460401 cri.go:89] found id: ""
	I1129 09:02:27.333854  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:02:27.333911  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:27.339591  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:02:27.339665  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:02:27.371040  460401 cri.go:89] found id: ""
	I1129 09:02:27.371072  460401 logs.go:282] 0 containers: []
	W1129 09:02:27.371092  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:02:27.371100  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:02:27.371156  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:02:27.404567  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:27.404591  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:27.404598  460401 cri.go:89] found id: ""
	I1129 09:02:27.404609  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:02:27.404679  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:27.411018  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:27.416301  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:02:27.416384  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:02:27.448123  460401 cri.go:89] found id: ""
	I1129 09:02:27.448154  460401 logs.go:282] 0 containers: []
	W1129 09:02:27.448166  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:02:27.448174  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:02:27.448239  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:02:27.479204  460401 cri.go:89] found id: "c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:27.479228  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:27.479233  460401 cri.go:89] found id: ""
	I1129 09:02:27.479243  460401 logs.go:282] 2 containers: [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:02:27.479299  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:27.485023  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:27.490034  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:02:27.490099  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:02:27.522830  460401 cri.go:89] found id: ""
	I1129 09:02:27.522862  460401 logs.go:282] 0 containers: []
	W1129 09:02:27.522872  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:02:27.522880  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:02:27.522940  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:02:27.556537  460401 cri.go:89] found id: ""
	I1129 09:02:27.556565  460401 logs.go:282] 0 containers: []
	W1129 09:02:27.556576  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:02:27.556589  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:02:27.556606  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:02:27.573324  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:02:27.573353  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:02:27.639338  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:02:27.639361  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:27.639380  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:27.675020  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:02:27.675050  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:27.723155  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:02:27.723191  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:27.762423  460401 logs.go:123] Gathering logs for kube-controller-manager [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6] ...
	I1129 09:02:27.762453  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:27.793598  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:02:27.793627  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:02:27.858089  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:02:27.858122  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:02:27.895696  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:02:27.895746  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:02:28.002060  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:02:28.002103  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:28.050250  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:28.050287  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:28.108778  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:02:28.108830  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	bb9fb2e713bd5       56cc512116c8f       8 seconds ago       Running             busybox                   0                   864d85bb8c066       busybox                                     default
	5edc79817e8ae       52546a367cc9e       14 seconds ago      Running             coredns                   0                   b4bf38030bbc6       coredns-66bc5c9577-nsh8w                    kube-system
	07f73647c6425       6e38f40d628db       14 seconds ago      Running             storage-provisioner       0                   13cafb453dbf6       storage-provisioner                         kube-system
	b3f766ac9f956       409467f978b4a       25 seconds ago      Running             kindnet-cni               0                   13d395db41ff5       kindnet-nscfk                               kube-system
	ff4ea2e8a24f9       fc25172553d79       28 seconds ago      Running             kube-proxy                0                   2dcc97f747328       kube-proxy-96fcg                            kube-system
	f8f46516dbe28       c80c8dbafe7dd       38 seconds ago      Running             kube-controller-manager   0                   d29a4696be107       kube-controller-manager-no-preload-924441   kube-system
	383685f5bf643       c3994bc696102       38 seconds ago      Running             kube-apiserver            0                   ec2efda1f0917       kube-apiserver-no-preload-924441            kube-system
	ab8fc300ad1ef       5f1f5298c888d       38 seconds ago      Running             etcd                      0                   e5b8283f11801       etcd-no-preload-924441                      kube-system
	ee9669cc467e6       7dd6aaa1717ab       38 seconds ago      Running             kube-scheduler            0                   78738700c9426       kube-scheduler-no-preload-924441            kube-system
	
	
	==> containerd <==
	Nov 29 09:02:20 no-preload-924441 containerd[660]: time="2025-11-29T09:02:20.839889807Z" level=info msg="Container 5edc79817e8ae8a5c88ab5c346145ff5aedbeedf18d092fb82c27a1bb984a93c: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:02:20 no-preload-924441 containerd[660]: time="2025-11-29T09:02:20.846487617Z" level=info msg="CreateContainer within sandbox \"13cafb453dbf625e29c8df581ed06b593e1a0c42d541d44342df98eeeff068f9\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"07f73647c64253486a8c6bcde1efc5cf43486a9cb6d0209e28918468208ad47c\""
	Nov 29 09:02:20 no-preload-924441 containerd[660]: time="2025-11-29T09:02:20.847189321Z" level=info msg="StartContainer for \"07f73647c64253486a8c6bcde1efc5cf43486a9cb6d0209e28918468208ad47c\""
	Nov 29 09:02:20 no-preload-924441 containerd[660]: time="2025-11-29T09:02:20.848794818Z" level=info msg="connecting to shim 07f73647c64253486a8c6bcde1efc5cf43486a9cb6d0209e28918468208ad47c" address="unix:///run/containerd/s/eddfe8d240380d848bdacc10ce1bae9eedf4156bdf79362fe1df71f1b2f642b1" protocol=ttrpc version=3
	Nov 29 09:02:20 no-preload-924441 containerd[660]: time="2025-11-29T09:02:20.850361090Z" level=info msg="CreateContainer within sandbox \"b4bf38030bbc62b2a8208ad75f3e67bc668615f9e522d03471806f535c7bb145\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5edc79817e8ae8a5c88ab5c346145ff5aedbeedf18d092fb82c27a1bb984a93c\""
	Nov 29 09:02:20 no-preload-924441 containerd[660]: time="2025-11-29T09:02:20.851438042Z" level=info msg="StartContainer for \"5edc79817e8ae8a5c88ab5c346145ff5aedbeedf18d092fb82c27a1bb984a93c\""
	Nov 29 09:02:20 no-preload-924441 containerd[660]: time="2025-11-29T09:02:20.855688040Z" level=info msg="connecting to shim 5edc79817e8ae8a5c88ab5c346145ff5aedbeedf18d092fb82c27a1bb984a93c" address="unix:///run/containerd/s/5ec5d1cddc9bd9035fa8847c5ff116cb50e88f64af67fe9931bde5d7bff42b20" protocol=ttrpc version=3
	Nov 29 09:02:20 no-preload-924441 containerd[660]: time="2025-11-29T09:02:20.914182629Z" level=info msg="StartContainer for \"07f73647c64253486a8c6bcde1efc5cf43486a9cb6d0209e28918468208ad47c\" returns successfully"
	Nov 29 09:02:20 no-preload-924441 containerd[660]: time="2025-11-29T09:02:20.918459134Z" level=info msg="StartContainer for \"5edc79817e8ae8a5c88ab5c346145ff5aedbeedf18d092fb82c27a1bb984a93c\" returns successfully"
	Nov 29 09:02:24 no-preload-924441 containerd[660]: time="2025-11-29T09:02:24.108622763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:26d445de-fc0e-4bc8-adac-935cd86ee75c,Namespace:default,Attempt:0,}"
	Nov 29 09:02:24 no-preload-924441 containerd[660]: time="2025-11-29T09:02:24.154864619Z" level=info msg="connecting to shim 864d85bb8c06624ebece8af29e58fcc4bb5ace7f92a5183c4a475044dd50d812" address="unix:///run/containerd/s/2e05430291980b4d6bf0132c253a183cf23ed974be9153d3634e00731e9afe21" namespace=k8s.io protocol=ttrpc version=3
	Nov 29 09:02:24 no-preload-924441 containerd[660]: time="2025-11-29T09:02:24.229979898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:26d445de-fc0e-4bc8-adac-935cd86ee75c,Namespace:default,Attempt:0,} returns sandbox id \"864d85bb8c06624ebece8af29e58fcc4bb5ace7f92a5183c4a475044dd50d812\""
	Nov 29 09:02:24 no-preload-924441 containerd[660]: time="2025-11-29T09:02:24.232344242Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 29 09:02:26 no-preload-924441 containerd[660]: time="2025-11-29T09:02:26.649879117Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:02:26 no-preload-924441 containerd[660]: time="2025-11-29T09:02:26.650753960Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396645"
	Nov 29 09:02:26 no-preload-924441 containerd[660]: time="2025-11-29T09:02:26.651900992Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:02:26 no-preload-924441 containerd[660]: time="2025-11-29T09:02:26.653656808Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:02:26 no-preload-924441 containerd[660]: time="2025-11-29T09:02:26.654037410Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.421638016s"
	Nov 29 09:02:26 no-preload-924441 containerd[660]: time="2025-11-29T09:02:26.654078474Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 29 09:02:26 no-preload-924441 containerd[660]: time="2025-11-29T09:02:26.657925505Z" level=info msg="CreateContainer within sandbox \"864d85bb8c06624ebece8af29e58fcc4bb5ace7f92a5183c4a475044dd50d812\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 29 09:02:26 no-preload-924441 containerd[660]: time="2025-11-29T09:02:26.665693746Z" level=info msg="Container bb9fb2e713bd50cb79f5d0f55e6c71417f53e295c33a00d17b6626aa73517ffa: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:02:26 no-preload-924441 containerd[660]: time="2025-11-29T09:02:26.671190023Z" level=info msg="CreateContainer within sandbox \"864d85bb8c06624ebece8af29e58fcc4bb5ace7f92a5183c4a475044dd50d812\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"bb9fb2e713bd50cb79f5d0f55e6c71417f53e295c33a00d17b6626aa73517ffa\""
	Nov 29 09:02:26 no-preload-924441 containerd[660]: time="2025-11-29T09:02:26.671809727Z" level=info msg="StartContainer for \"bb9fb2e713bd50cb79f5d0f55e6c71417f53e295c33a00d17b6626aa73517ffa\""
	Nov 29 09:02:26 no-preload-924441 containerd[660]: time="2025-11-29T09:02:26.672789903Z" level=info msg="connecting to shim bb9fb2e713bd50cb79f5d0f55e6c71417f53e295c33a00d17b6626aa73517ffa" address="unix:///run/containerd/s/2e05430291980b4d6bf0132c253a183cf23ed974be9153d3634e00731e9afe21" protocol=ttrpc version=3
	Nov 29 09:02:26 no-preload-924441 containerd[660]: time="2025-11-29T09:02:26.725390423Z" level=info msg="StartContainer for \"bb9fb2e713bd50cb79f5d0f55e6c71417f53e295c33a00d17b6626aa73517ffa\" returns successfully"
	
	
	==> coredns [5edc79817e8ae8a5c88ab5c346145ff5aedbeedf18d092fb82c27a1bb984a93c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39893 - 39917 "HINFO IN 7141279770989079680.5485495748569769835. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030214653s
	
	
	==> describe nodes <==
	Name:               no-preload-924441
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-924441
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=no-preload-924441
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_02_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:01:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-924441
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:02:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:02:31 +0000   Sat, 29 Nov 2025 09:01:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:02:31 +0000   Sat, 29 Nov 2025 09:01:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:02:31 +0000   Sat, 29 Nov 2025 09:01:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:02:31 +0000   Sat, 29 Nov 2025 09:02:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-924441
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                c7ceb567-1fa1-4ee0-a6f1-0da5aaa1749f
	  Boot ID:                    b81dce2f-73d5-4349-b473-aa1210058cb8
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-nsh8w                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-no-preload-924441                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-nscfk                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-924441             250m (3%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-no-preload-924441    200m (2%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-96fcg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-924441             100m (1%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 28s   kube-proxy       
	  Normal  Starting                 35s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  35s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  35s   kubelet          Node no-preload-924441 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s   kubelet          Node no-preload-924441 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s   kubelet          Node no-preload-924441 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s   node-controller  Node no-preload-924441 event: Registered Node no-preload-924441 in Controller
	  Normal  NodeReady                15s   kubelet          Node no-preload-924441 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov29 07:17] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001881] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084003] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.378167] i8042: Warning: Keylock active
	[  +0.012106] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.460417] block sda: the capability attribute has been deprecated.
	[  +0.079627] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.021012] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.285522] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [ab8fc300ad1ef7c9eaf3026a19c133b72463317f50802b7b0376a78df36cd618] <==
	{"level":"warn","ts":"2025-11-29T09:01:57.295570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.304542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.312406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.322990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.331598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.340488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.349394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.365924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.376234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.384804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.399554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.405419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.415385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.423031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.430700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.439764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.449377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.457199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.464780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.479306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.487062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.511769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.518539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.575545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36582","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-29T09:01:59.237788Z","caller":"traceutil/trace.go:172","msg":"trace[1580144077] transaction","detail":"{read_only:false; response_revision:80; number_of_response:1; }","duration":"151.091667ms","start":"2025-11-29T09:01:59.086619Z","end":"2025-11-29T09:01:59.237711Z","steps":["trace[1580144077] 'process raft request'  (duration: 64.671777ms)","trace[1580144077] 'compare'  (duration: 86.18899ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:02:35 up  1:44,  0 user,  load average: 2.43, 2.77, 12.32
	Linux no-preload-924441 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b3f766ac9f956727596072f40e76311c158de9cfd27a4fee708265933fe75040] <==
	I1129 09:02:10.078647       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:02:10.078936       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1129 09:02:10.079096       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:02:10.079115       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:02:10.079137       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:02:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:02:10.281933       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:02:10.281951       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:02:10.281959       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:02:10.282224       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:02:10.591456       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:02:10.591490       1 metrics.go:72] Registering metrics
	I1129 09:02:10.591605       1 controller.go:711] "Syncing nftables rules"
	I1129 09:02:20.285846       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1129 09:02:20.285905       1 main.go:301] handling current node
	I1129 09:02:30.282268       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1129 09:02:30.282320       1 main.go:301] handling current node
	
	
	==> kube-apiserver [383685f5bf6438d0f7ebd7a2a386df6adcee57fe778b3e1c03d8bf71aeff5355] <==
	I1129 09:01:58.133562       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1129 09:01:58.133922       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1129 09:01:58.136307       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:01:58.140436       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1129 09:01:58.147718       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 09:01:58.148746       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:01:58.157520       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:01:59.027165       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1129 09:01:59.031139       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1129 09:01:59.031159       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:01:59.660873       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:01:59.695141       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:01:59.831242       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1129 09:01:59.838074       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1129 09:01:59.839237       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:01:59.842982       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:02:00.038311       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:02:00.973644       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:02:00.984905       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1129 09:02:00.992034       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1129 09:02:05.789882       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:02:05.992413       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1129 09:02:06.094591       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:02:06.101669       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1129 09:02:33.900359       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:59672: use of closed network connection
	
	
	==> kube-controller-manager [f8f46516dbe2804e7cc2ef18e7ab9f61630c8861fb3068698765425112e7b9fb] <==
	I1129 09:02:04.998079       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1129 09:02:04.998093       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1129 09:02:04.998102       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1129 09:02:05.004334       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-924441" podCIDRs=["10.244.0.0/24"]
	I1129 09:02:05.038460       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 09:02:05.038484       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 09:02:05.038496       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1129 09:02:05.038520       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1129 09:02:05.038550       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1129 09:02:05.038589       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1129 09:02:05.038646       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1129 09:02:05.038661       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1129 09:02:05.038774       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1129 09:02:05.038663       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1129 09:02:05.040329       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1129 09:02:05.040370       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1129 09:02:05.040394       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1129 09:02:05.041025       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1129 09:02:05.041537       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1129 09:02:05.042726       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:02:05.043836       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:02:05.047458       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 09:02:05.053705       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:02:05.054799       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1129 09:02:24.989479       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ff4ea2e8a24f908f96cbd9a880011ea3baa8a548bacc2844c238189376f25019] <==
	I1129 09:02:06.631078       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:02:06.700886       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:02:06.801888       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:02:06.801923       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1129 09:02:06.802035       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:02:06.825814       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:02:06.825893       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:02:06.832862       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:02:06.833334       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:02:06.833374       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:02:06.837208       1 config.go:200] "Starting service config controller"
	I1129 09:02:06.837238       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:02:06.837350       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:02:06.837548       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:02:06.837565       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:02:06.837762       1 config.go:309] "Starting node config controller"
	I1129 09:02:06.837783       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:02:06.837789       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:02:06.838082       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:02:06.937437       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 09:02:06.937883       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 09:02:06.939103       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ee9669cc467e6d964524ce24464caca9bf8524a5a97a7275b088e9fd74ac089e] <==
	E1129 09:01:58.182270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 09:01:58.182280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 09:01:58.182265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 09:01:58.182367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:01:58.182441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 09:01:58.182521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 09:01:58.182556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 09:01:58.182601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 09:01:58.182634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 09:01:58.182680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 09:01:59.028399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 09:01:59.095025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 09:01:59.159692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 09:01:59.199792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 09:01:59.235130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 09:01:59.248432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 09:01:59.301093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1129 09:01:59.303111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 09:01:59.319306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 09:01:59.335448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 09:01:59.348897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 09:01:59.348897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 09:01:59.402719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:01:59.428255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1129 09:02:02.175664       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 09:02:01 no-preload-924441 kubelet[2148]: I1129 09:02:01.864854    2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-924441" podStartSLOduration=1.8648393859999999 podStartE2EDuration="1.864839386s" podCreationTimestamp="2025-11-29 09:02:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:02:01.864718526 +0000 UTC m=+1.148784794" watchObservedRunningTime="2025-11-29 09:02:01.864839386 +0000 UTC m=+1.148905656"
	Nov 29 09:02:01 no-preload-924441 kubelet[2148]: I1129 09:02:01.884266    2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-924441" podStartSLOduration=1.8842352629999999 podStartE2EDuration="1.884235263s" podCreationTimestamp="2025-11-29 09:02:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:02:01.875079325 +0000 UTC m=+1.159145596" watchObservedRunningTime="2025-11-29 09:02:01.884235263 +0000 UTC m=+1.168301535"
	Nov 29 09:02:01 no-preload-924441 kubelet[2148]: I1129 09:02:01.897207    2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-924441" podStartSLOduration=1.897186102 podStartE2EDuration="1.897186102s" podCreationTimestamp="2025-11-29 09:02:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:02:01.884627868 +0000 UTC m=+1.168694138" watchObservedRunningTime="2025-11-29 09:02:01.897186102 +0000 UTC m=+1.181252370"
	Nov 29 09:02:01 no-preload-924441 kubelet[2148]: I1129 09:02:01.897352    2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-924441" podStartSLOduration=1.897346712 podStartE2EDuration="1.897346712s" podCreationTimestamp="2025-11-29 09:02:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:02:01.896770879 +0000 UTC m=+1.180837150" watchObservedRunningTime="2025-11-29 09:02:01.897346712 +0000 UTC m=+1.181412983"
	Nov 29 09:02:05 no-preload-924441 kubelet[2148]: I1129 09:02:05.036551    2148 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 29 09:02:05 no-preload-924441 kubelet[2148]: I1129 09:02:05.037374    2148 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 29 09:02:06 no-preload-924441 kubelet[2148]: I1129 09:02:06.020008    2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c9fd8592-2ec4-4da3-a800-b136c118d379-kube-proxy\") pod \"kube-proxy-96fcg\" (UID: \"c9fd8592-2ec4-4da3-a800-b136c118d379\") " pod="kube-system/kube-proxy-96fcg"
	Nov 29 09:02:06 no-preload-924441 kubelet[2148]: I1129 09:02:06.020054    2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9fd8592-2ec4-4da3-a800-b136c118d379-xtables-lock\") pod \"kube-proxy-96fcg\" (UID: \"c9fd8592-2ec4-4da3-a800-b136c118d379\") " pod="kube-system/kube-proxy-96fcg"
	Nov 29 09:02:06 no-preload-924441 kubelet[2148]: I1129 09:02:06.020076    2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9fd8592-2ec4-4da3-a800-b136c118d379-lib-modules\") pod \"kube-proxy-96fcg\" (UID: \"c9fd8592-2ec4-4da3-a800-b136c118d379\") " pod="kube-system/kube-proxy-96fcg"
	Nov 29 09:02:06 no-preload-924441 kubelet[2148]: I1129 09:02:06.020096    2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhxc7\" (UniqueName: \"kubernetes.io/projected/c9fd8592-2ec4-4da3-a800-b136c118d379-kube-api-access-vhxc7\") pod \"kube-proxy-96fcg\" (UID: \"c9fd8592-2ec4-4da3-a800-b136c118d379\") " pod="kube-system/kube-proxy-96fcg"
	Nov 29 09:02:06 no-preload-924441 kubelet[2148]: I1129 09:02:06.120995    2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/052c2152-0369-4121-b2fe-25b79a00145a-xtables-lock\") pod \"kindnet-nscfk\" (UID: \"052c2152-0369-4121-b2fe-25b79a00145a\") " pod="kube-system/kindnet-nscfk"
	Nov 29 09:02:06 no-preload-924441 kubelet[2148]: I1129 09:02:06.121077    2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh679\" (UniqueName: \"kubernetes.io/projected/052c2152-0369-4121-b2fe-25b79a00145a-kube-api-access-nh679\") pod \"kindnet-nscfk\" (UID: \"052c2152-0369-4121-b2fe-25b79a00145a\") " pod="kube-system/kindnet-nscfk"
	Nov 29 09:02:06 no-preload-924441 kubelet[2148]: I1129 09:02:06.121138    2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/052c2152-0369-4121-b2fe-25b79a00145a-lib-modules\") pod \"kindnet-nscfk\" (UID: \"052c2152-0369-4121-b2fe-25b79a00145a\") " pod="kube-system/kindnet-nscfk"
	Nov 29 09:02:06 no-preload-924441 kubelet[2148]: I1129 09:02:06.121165    2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/052c2152-0369-4121-b2fe-25b79a00145a-cni-cfg\") pod \"kindnet-nscfk\" (UID: \"052c2152-0369-4121-b2fe-25b79a00145a\") " pod="kube-system/kindnet-nscfk"
	Nov 29 09:02:06 no-preload-924441 kubelet[2148]: I1129 09:02:06.857055    2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-96fcg" podStartSLOduration=1.857034866 podStartE2EDuration="1.857034866s" podCreationTimestamp="2025-11-29 09:02:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:02:06.856894235 +0000 UTC m=+6.140960503" watchObservedRunningTime="2025-11-29 09:02:06.857034866 +0000 UTC m=+6.141101133"
	Nov 29 09:02:10 no-preload-924441 kubelet[2148]: I1129 09:02:10.863762    2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-nscfk" podStartSLOduration=2.927795631 podStartE2EDuration="5.863713725s" podCreationTimestamp="2025-11-29 09:02:05 +0000 UTC" firstStartedPulling="2025-11-29 09:02:06.840294009 +0000 UTC m=+6.124360268" lastFinishedPulling="2025-11-29 09:02:09.776212102 +0000 UTC m=+9.060278362" observedRunningTime="2025-11-29 09:02:10.863649897 +0000 UTC m=+10.147716166" watchObservedRunningTime="2025-11-29 09:02:10.863713725 +0000 UTC m=+10.147779993"
	Nov 29 09:02:20 no-preload-924441 kubelet[2148]: I1129 09:02:20.381108    2148 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 29 09:02:20 no-preload-924441 kubelet[2148]: I1129 09:02:20.530227    2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf2a8ab9-aaca-4ee6-a390-a02099f693d9-config-volume\") pod \"coredns-66bc5c9577-nsh8w\" (UID: \"bf2a8ab9-aaca-4ee6-a390-a02099f693d9\") " pod="kube-system/coredns-66bc5c9577-nsh8w"
	Nov 29 09:02:20 no-preload-924441 kubelet[2148]: I1129 09:02:20.530273    2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/88b64cf8-3233-47bb-be31-6f367a8a1433-tmp\") pod \"storage-provisioner\" (UID: \"88b64cf8-3233-47bb-be31-6f367a8a1433\") " pod="kube-system/storage-provisioner"
	Nov 29 09:02:20 no-preload-924441 kubelet[2148]: I1129 09:02:20.530288    2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m46h9\" (UniqueName: \"kubernetes.io/projected/88b64cf8-3233-47bb-be31-6f367a8a1433-kube-api-access-m46h9\") pod \"storage-provisioner\" (UID: \"88b64cf8-3233-47bb-be31-6f367a8a1433\") " pod="kube-system/storage-provisioner"
	Nov 29 09:02:20 no-preload-924441 kubelet[2148]: I1129 09:02:20.530324    2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h92f6\" (UniqueName: \"kubernetes.io/projected/bf2a8ab9-aaca-4ee6-a390-a02099f693d9-kube-api-access-h92f6\") pod \"coredns-66bc5c9577-nsh8w\" (UID: \"bf2a8ab9-aaca-4ee6-a390-a02099f693d9\") " pod="kube-system/coredns-66bc5c9577-nsh8w"
	Nov 29 09:02:21 no-preload-924441 kubelet[2148]: I1129 09:02:21.890602    2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-nsh8w" podStartSLOduration=15.890582022 podStartE2EDuration="15.890582022s" podCreationTimestamp="2025-11-29 09:02:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:02:21.890580386 +0000 UTC m=+21.174646654" watchObservedRunningTime="2025-11-29 09:02:21.890582022 +0000 UTC m=+21.174648290"
	Nov 29 09:02:21 no-preload-924441 kubelet[2148]: I1129 09:02:21.908640    2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.908618766 podStartE2EDuration="15.908618766s" podCreationTimestamp="2025-11-29 09:02:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:02:21.9085051 +0000 UTC m=+21.192571368" watchObservedRunningTime="2025-11-29 09:02:21.908618766 +0000 UTC m=+21.192685035"
	Nov 29 09:02:23 no-preload-924441 kubelet[2148]: I1129 09:02:23.848480    2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5vqt\" (UniqueName: \"kubernetes.io/projected/26d445de-fc0e-4bc8-adac-935cd86ee75c-kube-api-access-v5vqt\") pod \"busybox\" (UID: \"26d445de-fc0e-4bc8-adac-935cd86ee75c\") " pod="default/busybox"
	Nov 29 09:02:26 no-preload-924441 kubelet[2148]: I1129 09:02:26.909451    2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.486034494 podStartE2EDuration="3.909430653s" podCreationTimestamp="2025-11-29 09:02:23 +0000 UTC" firstStartedPulling="2025-11-29 09:02:24.23159866 +0000 UTC m=+23.515664910" lastFinishedPulling="2025-11-29 09:02:26.654994819 +0000 UTC m=+25.939061069" observedRunningTime="2025-11-29 09:02:26.909209395 +0000 UTC m=+26.193275664" watchObservedRunningTime="2025-11-29 09:02:26.909430653 +0000 UTC m=+26.193496921"
	
	
	==> storage-provisioner [07f73647c64253486a8c6bcde1efc5cf43486a9cb6d0209e28918468208ad47c] <==
	I1129 09:02:20.925717       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 09:02:20.934861       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 09:02:20.934912       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 09:02:20.937126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:20.942580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:02:20.942795       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:02:20.942990       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-924441_3b51ec5f-33b1-4ec9-b892-014858a7836b!
	I1129 09:02:20.943190       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"86151451-b298-4f83-b326-526915f2b329", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-924441_3b51ec5f-33b1-4ec9-b892-014858a7836b became leader
	W1129 09:02:20.948055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:20.953090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:02:21.044015       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-924441_3b51ec5f-33b1-4ec9-b892-014858a7836b!
	W1129 09:02:22.956833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:22.960625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:24.963399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:24.967130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:26.970962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:26.975411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:28.978148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:28.983442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:30.986756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:30.990592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:32.993859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:32.998486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:35.001496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:35.005052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-924441 -n no-preload-924441
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-924441 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-924441
helpers_test.go:243: (dbg) docker inspect no-preload-924441:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a046473c1ebd3e2a896b4623ae8e55f92f450aee8768c4e4794475dd0cc24d4e",
	        "Created": "2025-11-29T09:01:32.925843748Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 495044,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:01:32.964068054Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/a046473c1ebd3e2a896b4623ae8e55f92f450aee8768c4e4794475dd0cc24d4e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a046473c1ebd3e2a896b4623ae8e55f92f450aee8768c4e4794475dd0cc24d4e/hostname",
	        "HostsPath": "/var/lib/docker/containers/a046473c1ebd3e2a896b4623ae8e55f92f450aee8768c4e4794475dd0cc24d4e/hosts",
	        "LogPath": "/var/lib/docker/containers/a046473c1ebd3e2a896b4623ae8e55f92f450aee8768c4e4794475dd0cc24d4e/a046473c1ebd3e2a896b4623ae8e55f92f450aee8768c4e4794475dd0cc24d4e-json.log",
	        "Name": "/no-preload-924441",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-924441:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-924441",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a046473c1ebd3e2a896b4623ae8e55f92f450aee8768c4e4794475dd0cc24d4e",
	                "LowerDir": "/var/lib/docker/overlay2/bf084be51e328d85d7140d3bad32d403cc9913fc552c9ca7103255f4bb584fbf-init/diff:/var/lib/docker/overlay2/eb180691bce18b8d981b2d61ed0962851c615364ed77c18ff66d559424569005/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bf084be51e328d85d7140d3bad32d403cc9913fc552c9ca7103255f4bb584fbf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bf084be51e328d85d7140d3bad32d403cc9913fc552c9ca7103255f4bb584fbf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bf084be51e328d85d7140d3bad32d403cc9913fc552c9ca7103255f4bb584fbf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-924441",
	                "Source": "/var/lib/docker/volumes/no-preload-924441/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-924441",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-924441",
	                "name.minikube.sigs.k8s.io": "no-preload-924441",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "47b2f0630bf6412a68ffd5a9a49dd44e6a182af0bdc63a26033a455ecf9fea54",
	            "SandboxKey": "/var/run/docker/netns/47b2f0630bf6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-924441": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "01c660269bf53aee934478816016519cb57246f9bdf0fd8776b42bd6fef191ec",
	                    "EndpointID": "ff825fdc88e8e3aa38fffe8f597fbd32723bbbdc953f28e7a6730f82ccf0aad2",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "4a:29:88:7e:70:ed",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-924441",
	                        "a046473c1ebd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-924441 -n no-preload-924441
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-924441 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-924441 logs -n 25: (1.103727819s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-770004 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ ssh     │ -p cilium-770004 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ ssh     │ -p cilium-770004 sudo containerd config dump                                                                                                                                                                                                        │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ ssh     │ -p cilium-770004 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ ssh     │ -p cilium-770004 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ ssh     │ -p cilium-770004 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ ssh     │ -p cilium-770004 sudo crio config                                                                                                                                                                                                                   │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │                     │
	│ delete  │ -p cilium-770004                                                                                                                                                                                                                                    │ cilium-770004            │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │ 29 Nov 25 09:00 UTC │
	│ start   │ -p force-systemd-env-693869 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-693869 │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │ 29 Nov 25 09:01 UTC │
	│ start   │ -p pause-563162 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                                                                              │ pause-563162             │ jenkins │ v1.37.0 │ 29 Nov 25 09:00 UTC │ 29 Nov 25 09:01 UTC │
	│ ssh     │ force-systemd-env-693869 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-693869 │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ delete  │ -p force-systemd-env-693869                                                                                                                                                                                                                         │ force-systemd-env-693869 │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ start   │ -p cert-options-536258 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-536258      │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ pause   │ -p pause-563162 --alsologtostderr -v=5                                                                                                                                                                                                              │ pause-563162             │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ unpause │ -p pause-563162 --alsologtostderr -v=5                                                                                                                                                                                                              │ pause-563162             │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ pause   │ -p pause-563162 --alsologtostderr -v=5                                                                                                                                                                                                              │ pause-563162             │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ delete  │ -p pause-563162 --alsologtostderr -v=5                                                                                                                                                                                                              │ pause-563162             │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ ssh     │ cert-options-536258 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-536258      │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ ssh     │ -p cert-options-536258 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-536258      │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ delete  │ -p cert-options-536258                                                                                                                                                                                                                              │ cert-options-536258      │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ delete  │ -p pause-563162                                                                                                                                                                                                                                     │ pause-563162             │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:01 UTC │
	│ start   │ -p old-k8s-version-295154 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-295154   │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:02 UTC │
	│ start   │ -p no-preload-924441 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-924441        │ jenkins │ v1.37.0 │ 29 Nov 25 09:01 UTC │ 29 Nov 25 09:02 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-295154 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-295154   │ jenkins │ v1.37.0 │ 29 Nov 25 09:02 UTC │ 29 Nov 25 09:02 UTC │
	│ stop    │ -p old-k8s-version-295154 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-295154   │ jenkins │ v1.37.0 │ 29 Nov 25 09:02 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:01:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:01:26.371812  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:01:26.372231  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:01:26.372304  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:01:26.372374  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:01:26.406988  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:26.407016  460401 cri.go:89] found id: "40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac"
	I1129 09:01:26.407022  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:26.407027  460401 cri.go:89] found id: ""
	I1129 09:01:26.407038  460401 logs.go:282] 3 containers: [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:01:26.407111  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.413707  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.419492  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.424920  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:01:26.424999  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:01:26.456369  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:26.456395  460401 cri.go:89] found id: ""
	I1129 09:01:26.456406  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:01:26.456466  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.462064  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:01:26.462133  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:01:26.492837  460401 cri.go:89] found id: ""
	I1129 09:01:26.492868  460401 logs.go:282] 0 containers: []
	W1129 09:01:26.492879  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:01:26.492887  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:01:26.492955  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:01:26.521715  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:26.521747  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:26.521754  460401 cri.go:89] found id: ""
	I1129 09:01:26.521763  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:01:26.521821  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.526872  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.531295  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:01:26.531353  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:01:26.558218  460401 cri.go:89] found id: ""
	I1129 09:01:26.558248  460401 logs.go:282] 0 containers: []
	W1129 09:01:26.558257  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:01:26.558264  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:01:26.558313  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:01:26.587221  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:26.587246  460401 cri.go:89] found id: "f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:26.587253  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:26.587258  460401 cri.go:89] found id: ""
	I1129 09:01:26.587268  460401 logs.go:282] 3 containers: [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:01:26.587328  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.591954  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.596055  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:26.600163  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:01:26.600219  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:01:26.628586  460401 cri.go:89] found id: ""
	I1129 09:01:26.628613  460401 logs.go:282] 0 containers: []
	W1129 09:01:26.628624  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:01:26.628633  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:01:26.628690  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:01:26.657553  460401 cri.go:89] found id: ""
	I1129 09:01:26.657581  460401 logs.go:282] 0 containers: []
	W1129 09:01:26.657591  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:01:26.657603  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:01:26.657622  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:01:26.721559  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:01:26.721584  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:01:26.721601  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:26.756136  460401 logs.go:123] Gathering logs for kube-controller-manager [f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00] ...
	I1129 09:01:26.756165  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:26.787789  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:01:26.787827  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:01:26.838908  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:01:26.838943  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:01:26.875689  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:01:26.875723  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:01:26.946907  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:01:26.946941  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:26.982883  460401 logs.go:123] Gathering logs for kube-apiserver [40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac] ...
	I1129 09:01:26.982919  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac"
	W1129 09:01:27.012923  460401 logs.go:130] failed kube-apiserver [40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac": Process exited with status 1
	stdout:
	
	stderr:
	E1129 09:01:27.010611    2688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac\": not found" containerID="40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac"
	time="2025-11-29T09:01:27Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac\": not found"
	 output: 
	** stderr ** 
	E1129 09:01:27.010611    2688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac\": not found" containerID="40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac"
	time="2025-11-29T09:01:27Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"40c6f3e103ae72dbb12c815df4659a1277b1a92060d18c5eb8f7b2d5365f14ac\": not found"
	
	** /stderr **
	I1129 09:01:27.012941  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:01:27.012953  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:27.051493  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:01:27.051526  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:27.089722  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:01:27.089755  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:27.138471  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:01:27.138504  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:27.172932  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:01:27.172962  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:27.207844  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:01:27.207878  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:01:29.500031  494126 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:01:29.500142  494126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:01:29.500153  494126 out.go:374] Setting ErrFile to fd 2...
	I1129 09:01:29.500159  494126 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:01:29.500372  494126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-255825/.minikube/bin
	I1129 09:01:29.500882  494126 out.go:368] Setting JSON to false
	I1129 09:01:29.501996  494126 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6233,"bootTime":1764400656,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:01:29.502070  494126 start.go:143] virtualization: kvm guest
	I1129 09:01:29.506976  494126 out.go:179] * [no-preload-924441] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:01:29.508162  494126 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:01:29.508182  494126 notify.go:221] Checking for updates...
	I1129 09:01:29.510318  494126 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:01:29.511334  494126 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-255825/kubeconfig
	I1129 09:01:29.516252  494126 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-255825/.minikube
	I1129 09:01:29.517321  494126 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:01:29.518374  494126 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:01:29.519877  494126 config.go:182] Loaded profile config "cert-expiration-368536": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:01:29.519989  494126 config.go:182] Loaded profile config "kubernetes-upgrade-806701": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:01:29.520095  494126 config.go:182] Loaded profile config "old-k8s-version-295154": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1129 09:01:29.520225  494126 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:01:29.546023  494126 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 09:01:29.546141  494126 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:01:29.607775  494126 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:81 SystemTime:2025-11-29 09:01:29.596891851 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:01:29.607908  494126 docker.go:319] overlay module found
	I1129 09:01:29.610288  494126 out.go:179] * Using the docker driver based on user configuration
	I1129 09:01:29.611200  494126 start.go:309] selected driver: docker
	I1129 09:01:29.611220  494126 start.go:927] validating driver "docker" against <nil>
	I1129 09:01:29.611231  494126 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:01:29.611850  494126 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:01:29.673266  494126 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:81 SystemTime:2025-11-29 09:01:29.662655452 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:01:29.673484  494126 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 09:01:29.673822  494126 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:01:29.675454  494126 out.go:179] * Using Docker driver with root privileges
	I1129 09:01:29.679127  494126 cni.go:84] Creating CNI manager for ""
	I1129 09:01:29.679243  494126 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:01:29.679264  494126 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 09:01:29.679351  494126 start.go:353] cluster config:
	{Name:no-preload-924441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-924441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:01:29.680591  494126 out.go:179] * Starting "no-preload-924441" primary control-plane node in "no-preload-924441" cluster
	I1129 09:01:29.681517  494126 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1129 09:01:29.682533  494126 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:01:29.683845  494126 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:01:29.683975  494126 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/config.json ...
	I1129 09:01:29.683971  494126 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:01:29.684042  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/config.json: {Name:mk4df9140f26fdbfe5b2addb71b44607d26b26a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:29.684181  494126 cache.go:107] acquiring lock: {Name:mka90f7eac55a6e5d6d9651fc108f327509b562f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684233  494126 cache.go:107] acquiring lock: {Name:mk2c250a4202b546a18f0cc7664314439a4ec834 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684259  494126 cache.go:107] acquiring lock: {Name:mk976aaa4e01b0c9e83cc6925b8c3c72804bfa25 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684288  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1129 09:01:29.684299  494126 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 144.373µs
	I1129 09:01:29.684315  494126 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1129 09:01:29.684321  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1129 09:01:29.684322  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1129 09:01:29.684332  494126 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 80.37µs
	I1129 09:01:29.684333  494126 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 119.913µs
	I1129 09:01:29.684341  494126 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1129 09:01:29.684344  494126 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1129 09:01:29.684332  494126 cache.go:107] acquiring lock: {Name:mkff44f5b6b961ddaa9acc3e74cf0480b0d2f776 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684358  494126 cache.go:107] acquiring lock: {Name:mk6080f4393a19fb5c4d6f436dce1a2bb1688f86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684378  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1129 09:01:29.684387  494126 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 58.113µs
	I1129 09:01:29.684395  494126 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1129 09:01:29.684399  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1129 09:01:29.684282  494126 cache.go:107] acquiring lock: {Name:mkb8e7a67c98a0b8caa208116d415323f5ca7ccc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684410  494126 cache.go:107] acquiring lock: {Name:mk47ee24ca074cb6cc1a641d737215686b099dc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684472  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1129 09:01:29.684482  494126 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 217.393µs
	I1129 09:01:29.684492  494126 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1129 09:01:29.684416  494126 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 61.464µs
	I1129 09:01:29.684504  494126 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1129 09:01:29.684517  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1129 09:01:29.684533  494126 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 171.692µs
	I1129 09:01:29.684552  494126 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1129 09:01:29.684643  494126 cache.go:107] acquiring lock: {Name:mk912246de843459c104f342794e23ecb1fc7a75 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.684790  494126 cache.go:115] /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1129 09:01:29.684806  494126 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 226.111µs
	I1129 09:01:29.684824  494126 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1129 09:01:29.684840  494126 cache.go:87] Successfully saved all images to host disk.
	I1129 09:01:29.706829  494126 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:01:29.706854  494126 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:01:29.706878  494126 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:01:29.706918  494126 start.go:360] acquireMachinesLock for no-preload-924441: {Name:mkf9f3b6b30f178cf9b9d50a2dabce8e2c5d48f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:01:29.707056  494126 start.go:364] duration metric: took 99.455µs to acquireMachinesLock for "no-preload-924441"
	I1129 09:01:29.707090  494126 start.go:93] Provisioning new machine with config: &{Name:no-preload-924441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-924441 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
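The config dump above is simply the cluster's Go configuration struct printed with %+v. A trimmed-down illustration follows; the types and field subset here are hypothetical stand-ins, not minikube's actual config types, but they produce output in the same &{Key:value ...} shape.

package main

import "fmt"

// Hypothetical, trimmed-down stand-ins for the cluster/node config the log
// prints with %+v; only a few of the logged fields are mirrored here.
type KubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
	ServiceCIDR       string
}

type Node struct {
	Port              int
	KubernetesVersion string
	ContainerRuntime  string
	ControlPlane      bool
	Worker            bool
}

type ClusterConfig struct {
	Name             string
	Memory           int
	CPUs             int
	Driver           string
	KubernetesConfig KubernetesConfig
	Nodes            []Node
}

func main() {
	cfg := ClusterConfig{
		Name:   "no-preload-924441",
		Memory: 3072,
		CPUs:   2,
		Driver: "docker",
		KubernetesConfig: KubernetesConfig{
			KubernetesVersion: "v1.34.1",
			ClusterName:       "no-preload-924441",
			ContainerRuntime:  "containerd",
			ServiceCIDR:       "10.96.0.0/12",
		},
		Nodes: []Node{{Port: 8443, KubernetesVersion: "v1.34.1", ContainerRuntime: "containerd", ControlPlane: true, Worker: true}},
	}
	// "%+v" on a pointer yields the "&{Key:value ...}" form seen in the log line above.
	fmt.Printf("Provisioning new machine with config: %+v\n", &cfg)
}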
	I1129 09:01:29.707206  494126 start.go:125] createHost starting for "" (driver="docker")
	I1129 09:01:28.461537  493486 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1129 09:01:28.461867  493486 start.go:159] libmachine.API.Create for "old-k8s-version-295154" (driver="docker")
	I1129 09:01:28.461917  493486 client.go:173] LocalClient.Create starting
	I1129 09:01:28.462009  493486 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem
	I1129 09:01:28.462065  493486 main.go:143] libmachine: Decoding PEM data...
	I1129 09:01:28.462089  493486 main.go:143] libmachine: Parsing certificate...
	I1129 09:01:28.462160  493486 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem
	I1129 09:01:28.462186  493486 main.go:143] libmachine: Decoding PEM data...
	I1129 09:01:28.462205  493486 main.go:143] libmachine: Parsing certificate...
	I1129 09:01:28.462679  493486 cli_runner.go:164] Run: docker network inspect old-k8s-version-295154 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 09:01:28.481658  493486 cli_runner.go:211] docker network inspect old-k8s-version-295154 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 09:01:28.481745  493486 network_create.go:284] running [docker network inspect old-k8s-version-295154] to gather additional debugging logs...
	I1129 09:01:28.481770  493486 cli_runner.go:164] Run: docker network inspect old-k8s-version-295154
	W1129 09:01:28.500619  493486 cli_runner.go:211] docker network inspect old-k8s-version-295154 returned with exit code 1
	I1129 09:01:28.500661  493486 network_create.go:287] error running [docker network inspect old-k8s-version-295154]: docker network inspect old-k8s-version-295154: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-295154 not found
	I1129 09:01:28.500677  493486 network_create.go:289] output of [docker network inspect old-k8s-version-295154]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-295154 not found
	
	** /stderr **
	I1129 09:01:28.500849  493486 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:01:28.518426  493486 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f69c672bf913 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:26:40:f4:ed:4f:ab} reservation:<nil>}
	I1129 09:01:28.519384  493486 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-96d20aff5877 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:01:e2:a3:b8:33} reservation:<nil>}
	I1129 09:01:28.520407  493486 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f7906c56f869 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:29:75:e3:e0:7f} reservation:<nil>}
	I1129 09:01:28.521974  493486 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f90700}
	I1129 09:01:28.522028  493486 network_create.go:124] attempt to create docker network old-k8s-version-295154 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1129 09:01:28.522109  493486 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-295154 old-k8s-version-295154
	I1129 09:01:28.575478  493486 network_create.go:108] docker network old-k8s-version-295154 192.168.76.0/24 created
	I1129 09:01:28.575522  493486 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-295154" container
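The subnet scan above walks candidate private /24 networks, stepping the third octet by 9 (49, 58, 67, ...), skips any already owned by an existing bridge, and then reserves .1 for the gateway and .2 as the node's static IP. A compact sketch of that selection, with a hypothetical isTaken lookup standing in for the real host-interface inspection:

package main

import "fmt"

// isTaken is a hypothetical stand-in for checking whether a bridge interface
// already owns the subnet (the real code inspects host network interfaces).
func isTaken(subnet string, taken map[string]bool) bool { return taken[subnet] }

func main() {
	// Subnets already claimed by existing minikube networks, per the log above.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	// Candidate third octets advance in steps of 9: 49, 58, 67, 76, ...
	for octet := 49; octet <= 255; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if isTaken(subnet, taken) {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		staticIP := fmt.Sprintf("192.168.%d.2", octet) // first client address goes to the node container
		fmt.Printf("using free private subnet %s (gateway %s, node IP %s)\n", subnet, gateway, staticIP)
		break
	}
}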
	I1129 09:01:28.575603  493486 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 09:01:28.593666  493486 cli_runner.go:164] Run: docker volume create old-k8s-version-295154 --label name.minikube.sigs.k8s.io=old-k8s-version-295154 --label created_by.minikube.sigs.k8s.io=true
	I1129 09:01:28.612389  493486 oci.go:103] Successfully created a docker volume old-k8s-version-295154
	I1129 09:01:28.612501  493486 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-295154-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-295154 --entrypoint /usr/bin/test -v old-k8s-version-295154:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 09:01:29.238109  493486 oci.go:107] Successfully prepared a docker volume old-k8s-version-295154
	I1129 09:01:29.238162  493486 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1129 09:01:29.238176  493486 kic.go:194] Starting extracting preloaded images to volume ...
	I1129 09:01:29.238241  493486 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-255825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-295154:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1129 09:01:32.586626  493486 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-255825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-295154:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (3.348341473s)
	I1129 09:01:32.586660  493486 kic.go:203] duration metric: took 3.348481997s to extract preloaded images to volume ...
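The preload step above mounts the lz4 tarball read-only into a throwaway kicbase container and untars it into the cluster's named volume. A rough equivalent via os/exec, with hypothetical paths standing in for the real tarball, volume name, and image tag:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Hypothetical values; the real paths are the preloaded-images tarball and
	// the per-cluster docker volume seen in the log above.
	tarball := "/path/to/preloaded-images-k8s.tar.lz4"
	volume := "old-k8s-version-295154"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48"

	// Same shape as the logged command: mount the tarball read-only, mount the
	// cluster volume at /extractDir, and untar into it inside the container.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println("extract failed:", err, string(out))
		return
	}
	fmt.Println("preloaded images extracted into volume", volume)
}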
	W1129 09:01:32.586761  493486 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1129 09:01:32.586805  493486 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1129 09:01:32.586861  493486 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 09:01:32.650922  493486 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-295154 --name old-k8s-version-295154 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-295154 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-295154 --network old-k8s-version-295154 --ip 192.168.76.2 --volume old-k8s-version-295154:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 09:01:32.982372  493486 cli_runner.go:164] Run: docker container inspect old-k8s-version-295154 --format={{.State.Running}}
	I1129 09:01:33.001073  493486 cli_runner.go:164] Run: docker container inspect old-k8s-version-295154 --format={{.State.Status}}
	I1129 09:01:33.021021  493486 cli_runner.go:164] Run: docker exec old-k8s-version-295154 stat /var/lib/dpkg/alternatives/iptables
	I1129 09:01:33.078706  493486 oci.go:144] the created container "old-k8s-version-295154" has a running status.
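The container start above is followed by inspect calls on .State.Running and .State.Status before the node is declared to have a running status. A minimal sketch of that readiness check, assuming the docker CLI is available locally and reusing the container name from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// running reports whether `docker container inspect --format={{.State.Running}}`
// prints "true" for the named container.
func running(name string) (bool, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"--format", "{{.State.Running}}", name).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "true", nil
}

func main() {
	name := "old-k8s-version-295154" // container name from the log above
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		ok, err := running(name)
		if err == nil && ok {
			fmt.Printf("the created container %q has a running status.\n", name)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("container never reached running state")
}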
	I1129 09:01:33.078890  493486 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa...
	I1129 09:01:33.213970  493486 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 09:01:33.251103  493486 cli_runner.go:164] Run: docker container inspect old-k8s-version-295154 --format={{.State.Status}}
	I1129 09:01:29.709142  494126 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1129 09:01:29.709367  494126 start.go:159] libmachine.API.Create for "no-preload-924441" (driver="docker")
	I1129 09:01:29.709398  494126 client.go:173] LocalClient.Create starting
	I1129 09:01:29.709475  494126 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem
	I1129 09:01:29.709526  494126 main.go:143] libmachine: Decoding PEM data...
	I1129 09:01:29.709553  494126 main.go:143] libmachine: Parsing certificate...
	I1129 09:01:29.709629  494126 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem
	I1129 09:01:29.709661  494126 main.go:143] libmachine: Decoding PEM data...
	I1129 09:01:29.709679  494126 main.go:143] libmachine: Parsing certificate...
	I1129 09:01:29.710082  494126 cli_runner.go:164] Run: docker network inspect no-preload-924441 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 09:01:29.727862  494126 cli_runner.go:211] docker network inspect no-preload-924441 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 09:01:29.727982  494126 network_create.go:284] running [docker network inspect no-preload-924441] to gather additional debugging logs...
	I1129 09:01:29.728011  494126 cli_runner.go:164] Run: docker network inspect no-preload-924441
	W1129 09:01:29.747053  494126 cli_runner.go:211] docker network inspect no-preload-924441 returned with exit code 1
	I1129 09:01:29.747092  494126 network_create.go:287] error running [docker network inspect no-preload-924441]: docker network inspect no-preload-924441: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-924441 not found
	I1129 09:01:29.747129  494126 network_create.go:289] output of [docker network inspect no-preload-924441]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-924441 not found
	
	** /stderr **
	I1129 09:01:29.747297  494126 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:01:29.769138  494126 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f69c672bf913 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:26:40:f4:ed:4f:ab} reservation:<nil>}
	I1129 09:01:29.769961  494126 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-96d20aff5877 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:01:e2:a3:b8:33} reservation:<nil>}
	I1129 09:01:29.770795  494126 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f7906c56f869 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:29:75:e3:e0:7f} reservation:<nil>}
	I1129 09:01:29.771440  494126 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-aea341d97cf5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ea:fb:22:ff:e0:65} reservation:<nil>}
	I1129 09:01:29.771972  494126 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-5ec7c7346e1b IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:f6:a5:df:dd:c8:cf} reservation:<nil>}
	I1129 09:01:29.772536  494126 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-ede9a8c5c6b0 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:3e:6e:06:75:02:7a} reservation:<nil>}
	I1129 09:01:29.773382  494126 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00201aa40}
	I1129 09:01:29.773412  494126 network_create.go:124] attempt to create docker network no-preload-924441 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1129 09:01:29.773492  494126 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-924441 no-preload-924441
	I1129 09:01:29.826699  494126 network_create.go:108] docker network no-preload-924441 192.168.103.0/24 created
	I1129 09:01:29.826822  494126 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-924441" container
	I1129 09:01:29.826907  494126 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 09:01:29.848520  494126 cli_runner.go:164] Run: docker volume create no-preload-924441 --label name.minikube.sigs.k8s.io=no-preload-924441 --label created_by.minikube.sigs.k8s.io=true
	I1129 09:01:29.870388  494126 oci.go:103] Successfully created a docker volume no-preload-924441
	I1129 09:01:29.870496  494126 cli_runner.go:164] Run: docker run --rm --name no-preload-924441-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-924441 --entrypoint /usr/bin/test -v no-preload-924441:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 09:01:32.848045  494126 cli_runner.go:217] Completed: docker run --rm --name no-preload-924441-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-924441 --entrypoint /usr/bin/test -v no-preload-924441:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (2.977502795s)
	I1129 09:01:32.848077  494126 oci.go:107] Successfully prepared a docker volume no-preload-924441
	I1129 09:01:32.848131  494126 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	W1129 09:01:32.848227  494126 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1129 09:01:32.848271  494126 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1129 09:01:32.848312  494126 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 09:01:32.909124  494126 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-924441 --name no-preload-924441 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-924441 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-924441 --network no-preload-924441 --ip 192.168.103.2 --volume no-preload-924441:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 09:01:33.229639  494126 cli_runner.go:164] Run: docker container inspect no-preload-924441 --format={{.State.Running}}
	I1129 09:01:33.257967  494126 cli_runner.go:164] Run: docker container inspect no-preload-924441 --format={{.State.Status}}
	I1129 09:01:33.283525  494126 cli_runner.go:164] Run: docker exec no-preload-924441 stat /var/lib/dpkg/alternatives/iptables
	I1129 09:01:33.358911  494126 oci.go:144] the created container "no-preload-924441" has a running status.
	I1129 09:01:33.358964  494126 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa...
	I1129 09:01:33.456248  494126 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 09:01:33.491041  494126 cli_runner.go:164] Run: docker container inspect no-preload-924441 --format={{.State.Status}}
	I1129 09:01:33.515555  494126 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 09:01:33.515581  494126 kic_runner.go:114] Args: [docker exec --privileged no-preload-924441 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 09:01:33.567971  494126 cli_runner.go:164] Run: docker container inspect no-preload-924441 --format={{.State.Status}}
	I1129 09:01:33.599907  494126 machine.go:94] provisionDockerMachine start ...
	I1129 09:01:33.599999  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:33.634873  494126 main.go:143] libmachine: Using SSH client type: native
	I1129 09:01:33.635521  494126 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1129 09:01:33.635590  494126 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:01:33.636667  494126 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34766->127.0.0.1:33063: read: connection reset by peer
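The failed dial above is expected on a first attempt: the published 22/tcp port exists as soon as the container starts, but sshd inside it may not yet accept connections, so provisioning retries. A minimal retry sketch using the golang.org/x/crypto/ssh module, with a hypothetical address and key path:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical values; the log uses 127.0.0.1 plus the host port mapped to 22/tcp.
	addr := "127.0.0.1:33063"
	keyPath := "/path/to/machines/no-preload-924441/id_rsa"

	pemBytes, err := os.ReadFile(keyPath)
	if err != nil {
		fmt.Println("read key:", err)
		return
	}
	signer, err := ssh.ParsePrivateKey(pemBytes)
	if err != nil {
		fmt.Println("parse key:", err)
		return
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test container, no known_hosts
		Timeout:         5 * time.Second,
	}
	// The very first handshake often fails with "connection reset by peer"
	// while sshd inside the container is still coming up, so retry briefly.
	for attempt := 1; attempt <= 10; attempt++ {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			fmt.Println("ssh ready after attempt", attempt)
			client.Close()
			return
		}
		fmt.Println("attempt", attempt, "failed:", err)
		time.Sleep(time.Second)
	}
}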
	I1129 09:01:29.724136  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:01:29.724608  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:01:29.724657  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:01:29.724702  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:01:29.763194  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:29.763266  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:29.763286  460401 cri.go:89] found id: ""
	I1129 09:01:29.763304  460401 logs.go:282] 2 containers: [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:01:29.763372  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.769877  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.774814  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:01:29.774887  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:01:29.810078  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:29.810105  460401 cri.go:89] found id: ""
	I1129 09:01:29.810116  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:01:29.810167  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.815272  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:01:29.815349  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:01:29.851653  460401 cri.go:89] found id: ""
	I1129 09:01:29.851680  460401 logs.go:282] 0 containers: []
	W1129 09:01:29.851691  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:01:29.851700  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:01:29.851773  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:01:29.883424  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:29.883449  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:29.883456  460401 cri.go:89] found id: ""
	I1129 09:01:29.883466  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:01:29.883537  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.889105  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.894072  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:01:29.894150  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:01:29.924971  460401 cri.go:89] found id: ""
	I1129 09:01:29.925006  460401 logs.go:282] 0 containers: []
	W1129 09:01:29.925019  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:01:29.925027  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:01:29.925129  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:01:29.954168  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:29.954194  460401 cri.go:89] found id: "f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:29.954199  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:29.954203  460401 cri.go:89] found id: ""
	I1129 09:01:29.954214  460401 logs.go:282] 3 containers: [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:01:29.954278  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.959542  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.964240  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:29.968754  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:01:29.968820  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:01:29.999663  460401 cri.go:89] found id: ""
	I1129 09:01:29.999685  460401 logs.go:282] 0 containers: []
	W1129 09:01:29.999694  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:01:29.999700  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:01:29.999780  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:01:30.029803  460401 cri.go:89] found id: ""
	I1129 09:01:30.029833  460401 logs.go:282] 0 containers: []
	W1129 09:01:30.029845  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:01:30.029859  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:01:30.029877  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:30.069873  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:01:30.069904  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:30.108923  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:01:30.108958  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:30.146649  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:01:30.146682  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:01:30.190480  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:01:30.190514  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:01:30.225134  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:01:30.225167  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:01:30.299416  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:01:30.299461  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:01:30.314711  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:01:30.314766  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:01:30.384833  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:01:30.384856  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:01:30.384879  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:30.420690  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:01:30.420720  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:30.476182  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:01:30.476221  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:30.507666  460401 logs.go:123] Gathering logs for kube-controller-manager [f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00] ...
	I1129 09:01:30.507698  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:30.536613  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:01:30.536640  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
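Each log-gathering pass above follows the same two-step pattern per component: crictl ps -a --quiet --name=<component> to enumerate container IDs (running or exited), then crictl logs --tail 400 <id> for each match. A minimal standalone sketch of that loop, assuming crictl is on PATH and invoked directly rather than through minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the "crictl ps -a --quiet --name=<component>" step:
// it returns every container ID (running or exited) whose name matches.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Println("listing", component, "failed:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			// Same shape as the logged command: tail the last 400 lines per container.
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("--- %s (%s) ---\n%s\n", component, id, logs)
		}
	}
}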
	I1129 09:01:33.076844  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:01:33.077304  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:01:33.077371  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:01:33.077426  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:01:33.111899  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:33.111922  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:33.111928  460401 cri.go:89] found id: ""
	I1129 09:01:33.111938  460401 logs.go:282] 2 containers: [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:01:33.111995  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.117191  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.122615  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:01:33.122688  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:01:33.163794  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:33.163822  460401 cri.go:89] found id: ""
	I1129 09:01:33.163834  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:01:33.163897  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.170244  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:01:33.170334  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:01:33.203629  460401 cri.go:89] found id: ""
	I1129 09:01:33.203662  460401 logs.go:282] 0 containers: []
	W1129 09:01:33.203675  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:01:33.203683  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:01:33.203759  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:01:33.248112  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:33.248142  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:33.248148  460401 cri.go:89] found id: ""
	I1129 09:01:33.248159  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:01:33.248226  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.255192  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.262339  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:01:33.262419  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:01:33.308727  460401 cri.go:89] found id: ""
	I1129 09:01:33.308855  460401 logs.go:282] 0 containers: []
	W1129 09:01:33.308869  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:01:33.308878  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:01:33.309309  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:01:33.361181  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:33.361234  460401 cri.go:89] found id: "f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:33.361241  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:33.361245  460401 cri.go:89] found id: ""
	I1129 09:01:33.361255  460401 logs.go:282] 3 containers: [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:01:33.361343  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.368091  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.374495  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:33.380899  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:01:33.380965  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:01:33.430643  460401 cri.go:89] found id: ""
	I1129 09:01:33.430670  460401 logs.go:282] 0 containers: []
	W1129 09:01:33.430681  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:01:33.430689  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:01:33.430771  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:01:33.467019  460401 cri.go:89] found id: ""
	I1129 09:01:33.467047  460401 logs.go:282] 0 containers: []
	W1129 09:01:33.467058  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:01:33.467072  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:01:33.467091  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:01:33.529538  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:01:33.529588  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:33.591866  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:01:33.591912  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:33.664144  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:01:33.664179  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:33.701152  460401 logs.go:123] Gathering logs for kube-controller-manager [f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00] ...
	I1129 09:01:33.701195  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:33.735624  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:01:33.735669  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:33.774144  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:01:33.774175  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:01:33.808426  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:01:33.808461  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:01:33.898471  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:01:33.898509  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:01:33.914358  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:01:33.914394  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:01:33.978927  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:01:33.978954  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:01:33.978975  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:34.016239  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:01:34.016268  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:34.055208  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:01:34.055239  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
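Both gathering passes above are triggered by the same failing probe at https://192.168.85.2:8443/healthz, which the log reports as "stopped" while the connection is refused. A minimal sketch of that probe; it skips TLS verification because the apiserver presents its own certificate, and treats a transport error the same way the log does:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.85.2:8443/healthz")
	if err != nil {
		// "connection refused" here is what the log above reports as "stopped".
		fmt.Println("stopped:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz status:", resp.Status)
}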
	I1129 09:01:33.275806  493486 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 09:01:33.275832  493486 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-295154 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 09:01:33.349350  493486 cli_runner.go:164] Run: docker container inspect old-k8s-version-295154 --format={{.State.Status}}
	I1129 09:01:33.378383  493486 machine.go:94] provisionDockerMachine start ...
	I1129 09:01:33.378475  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:33.410015  493486 main.go:143] libmachine: Using SSH client type: native
	I1129 09:01:33.410367  493486 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1129 09:01:33.410384  493486 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:01:33.577990  493486 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-295154
	
	I1129 09:01:33.578018  493486 ubuntu.go:182] provisioning hostname "old-k8s-version-295154"
	I1129 09:01:33.578086  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:33.609401  493486 main.go:143] libmachine: Using SSH client type: native
	I1129 09:01:33.609890  493486 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1129 09:01:33.609953  493486 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-295154 && echo "old-k8s-version-295154" | sudo tee /etc/hostname
	I1129 09:01:33.789112  493486 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-295154
	
	I1129 09:01:33.789205  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:33.813423  493486 main.go:143] libmachine: Using SSH client type: native
	I1129 09:01:33.813741  493486 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1129 09:01:33.813774  493486 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-295154' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-295154/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-295154' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:01:33.966671  493486 main.go:143] libmachine: SSH cmd err, output: <nil>: 
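The SSH snippet above makes the hostname mapping idempotent: if no /etc/hosts line already ends with the hostname, it either rewrites an existing 127.0.1.1 entry or appends a new one. The same logic as a small Go function operating on an in-memory copy of the file, offered only as an illustration of the shell above:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname reproduces the shell logic: skip if the name is already mapped,
// otherwise rewrite an existing "127.0.1.1 ..." line or append a fresh one.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts // already present
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(hosts) {
		return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	if !strings.HasSuffix(hosts, "\n") {
		hosts += "\n"
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	hosts := "127.0.0.1 localhost\n127.0.1.1 debian\n"
	fmt.Print(ensureHostname(hosts, "old-k8s-version-295154"))
}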
	I1129 09:01:33.966701  493486 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-255825/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-255825/.minikube}
	I1129 09:01:33.966720  493486 ubuntu.go:190] setting up certificates
	I1129 09:01:33.966746  493486 provision.go:84] configureAuth start
	I1129 09:01:33.966809  493486 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-295154
	I1129 09:01:33.987509  493486 provision.go:143] copyHostCerts
	I1129 09:01:33.987591  493486 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem, removing ...
	I1129 09:01:33.987609  493486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem
	I1129 09:01:33.987703  493486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem (1078 bytes)
	I1129 09:01:33.987854  493486 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem, removing ...
	I1129 09:01:33.987873  493486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem
	I1129 09:01:33.987926  493486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem (1123 bytes)
	I1129 09:01:33.988030  493486 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem, removing ...
	I1129 09:01:33.988043  493486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem
	I1129 09:01:33.988093  493486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem (1679 bytes)
	I1129 09:01:33.988197  493486 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-295154 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-295154]
	I1129 09:01:34.173289  493486 provision.go:177] copyRemoteCerts
	I1129 09:01:34.173365  493486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:01:34.173409  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:34.192053  493486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa Username:docker}
	I1129 09:01:34.294293  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:01:34.313898  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1129 09:01:34.331337  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 09:01:34.348272  493486 provision.go:87] duration metric: took 381.510752ms to configureAuth
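configureAuth above regenerates a server certificate whose subject alternative names cover 127.0.0.1, the node IP 192.168.76.2, localhost, minikube, and the machine name. A simplified sketch with crypto/x509; it self-signs for brevity, whereas minikube signs with the CA key copied earlier, and the 26280h lifetime is taken from the CertExpiration value in the config dump:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SANs mirror the log: loopback, the node's static IP, and the host names.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-295154"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-295154"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here for brevity; the real flow signs with the minikube CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}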
	I1129 09:01:34.348301  493486 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:01:34.348457  493486 config.go:182] Loaded profile config "old-k8s-version-295154": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1129 09:01:34.348472  493486 machine.go:97] duration metric: took 970.068662ms to provisionDockerMachine
	I1129 09:01:34.348481  493486 client.go:176] duration metric: took 5.886553133s to LocalClient.Create
	I1129 09:01:34.348502  493486 start.go:167] duration metric: took 5.88663904s to libmachine.API.Create "old-k8s-version-295154"
	I1129 09:01:34.348512  493486 start.go:293] postStartSetup for "old-k8s-version-295154" (driver="docker")
	I1129 09:01:34.348520  493486 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:01:34.348570  493486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:01:34.348614  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:34.366501  493486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa Username:docker}
	I1129 09:01:34.469910  493486 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:01:34.473823  493486 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:01:34.473855  493486 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:01:34.473868  493486 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-255825/.minikube/addons for local assets ...
	I1129 09:01:34.473922  493486 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-255825/.minikube/files for local assets ...
	I1129 09:01:34.474038  493486 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem -> 2594832.pem in /etc/ssl/certs
	I1129 09:01:34.474177  493486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:01:34.481912  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem --> /etc/ssl/certs/2594832.pem (1708 bytes)
	I1129 09:01:34.502433  493486 start.go:296] duration metric: took 153.905912ms for postStartSetup
	I1129 09:01:34.502813  493486 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-295154
	I1129 09:01:34.520071  493486 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/config.json ...
	I1129 09:01:34.520308  493486 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:01:34.520347  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:34.539111  493486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa Username:docker}
	I1129 09:01:34.640199  493486 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:01:34.644901  493486 start.go:128] duration metric: took 6.185289215s to createHost
	I1129 09:01:34.644928  493486 start.go:83] releasing machines lock for "old-k8s-version-295154", held for 6.185484113s
	I1129 09:01:34.644991  493486 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-295154
	I1129 09:01:34.662525  493486 ssh_runner.go:195] Run: cat /version.json
	I1129 09:01:34.662583  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:34.662584  493486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:01:34.662648  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:34.679837  493486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa Username:docker}
	I1129 09:01:34.681115  493486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa Username:docker}
	I1129 09:01:34.833568  493486 ssh_runner.go:195] Run: systemctl --version
	I1129 09:01:34.840355  493486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:01:34.844844  493486 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:01:34.844907  493486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:01:34.869137  493486 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1129 09:01:34.869161  493486 start.go:496] detecting cgroup driver to use...
	I1129 09:01:34.869194  493486 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:01:34.869251  493486 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1129 09:01:34.883461  493486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1129 09:01:34.895885  493486 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:01:34.895942  493486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:01:34.912002  493486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:01:34.929350  493486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:01:35.015369  493486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:01:35.101537  493486 docker.go:234] disabling docker service ...
	I1129 09:01:35.101597  493486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:01:35.120759  493486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:01:35.133226  493486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:01:35.217122  493486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:01:35.301702  493486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:01:35.314440  493486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:01:35.328312  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1129 09:01:35.338331  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1129 09:01:35.346975  493486 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1129 09:01:35.347033  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1129 09:01:35.355511  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:01:35.363986  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1129 09:01:35.372342  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:01:35.380589  493486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:01:35.388205  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1129 09:01:35.396344  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1129 09:01:35.404459  493486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1129 09:01:35.412783  493486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:01:35.420177  493486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:01:35.427378  493486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:01:35.508150  493486 ssh_runner.go:195] Run: sudo systemctl restart containerd
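The run of sed commands above rewrites /etc/containerd/config.toml in place (pause image, systemd cgroup driver, runc v2 runtime, CNI conf_dir, unprivileged ports) before the daemon-reload and restart. A quick check, as a sketch that assumes the stock config layout those sed expressions target:

    # sketch: confirm the keys the sed substitutions above are expected to have rewritten
    sudo grep -E 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
    # expected (per the substitutions in the log):
    #   sandbox_image = "registry.k8s.io/pause:3.9"
    #   SystemdCgroup = true
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true
    sudo systemctl is-active containerd   # confirms the restart above succeeded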
	I1129 09:01:35.605801  493486 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1129 09:01:35.605868  493486 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1129 09:01:35.610095  493486 start.go:564] Will wait 60s for crictl version
	I1129 09:01:35.610140  493486 ssh_runner.go:195] Run: which crictl
	I1129 09:01:35.613826  493486 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:01:35.640869  493486 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1129 09:01:35.640947  493486 ssh_runner.go:195] Run: containerd --version
	I1129 09:01:35.662573  493486 ssh_runner.go:195] Run: containerd --version
	I1129 09:01:35.686990  493486 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1129 09:01:35.688126  493486 cli_runner.go:164] Run: docker network inspect old-k8s-version-295154 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:01:35.705269  493486 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1129 09:01:35.709565  493486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
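The host.minikube.internal update above filters the old entry out into a temp file and then copies it back with sudo; a plain `sudo grep ... > /etc/hosts` would perform the redirection as the unprivileged user, so the temp-file-plus-cp pattern keeps the rewrite both privileged and idempotent. A sketch of the same pattern with placeholder IP/NAME values:

    # sketch of the idempotent /etc/hosts rewrite used above (IP and NAME are placeholders)
    IP=192.168.76.1; NAME=host.minikube.internal
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts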
	I1129 09:01:35.720029  493486 kubeadm.go:884] updating cluster {Name:old-k8s-version-295154 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-295154 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:01:35.720146  493486 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1129 09:01:35.720192  493486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:01:35.745337  493486 containerd.go:627] all images are preloaded for containerd runtime.
	I1129 09:01:35.745359  493486 containerd.go:534] Images already preloaded, skipping extraction
	I1129 09:01:35.745433  493486 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:01:35.768552  493486 containerd.go:627] all images are preloaded for containerd runtime.
	I1129 09:01:35.768573  493486 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:01:35.768582  493486 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 containerd true true} ...
	I1129 09:01:35.768708  493486 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-295154 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-295154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
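The kubelet unit override shown above is the content that is later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp lines further down in this log). A sketch for confirming what systemd actually loads on the node:

    # sketch: show the kubelet unit plus the 10-kubeadm.conf drop-in written by minikube
    systemctl cat kubelet
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf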
	I1129 09:01:35.768800  493486 ssh_runner.go:195] Run: sudo crictl info
	I1129 09:01:35.793684  493486 cni.go:84] Creating CNI manager for ""
	I1129 09:01:35.793704  493486 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:01:35.793722  493486 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:01:35.793760  493486 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-295154 NodeName:old-k8s-version-295154 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt S
taticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:01:35.793881  493486 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-295154"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
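The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered into a single file that the scp step below places at /var/tmp/minikube/kubeadm.yaml.new. A sketch for inspecting the rendered config on the node:

    # sketch: list the document kinds in the kubeadm config rendered onto the node
    sudo grep '^kind:' /var/tmp/minikube/kubeadm.yaml.new
    # expected: InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration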
	I1129 09:01:35.793941  493486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1129 09:01:35.801702  493486 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:01:35.801779  493486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:01:35.809370  493486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1129 09:01:35.821645  493486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:01:35.837123  493486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2175 bytes)
	I1129 09:01:35.849282  493486 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:01:35.852777  493486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:01:35.862291  493486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:01:35.945522  493486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:01:35.967020  493486 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154 for IP: 192.168.76.2
	I1129 09:01:35.967046  493486 certs.go:195] generating shared ca certs ...
	I1129 09:01:35.967066  493486 certs.go:227] acquiring lock for ca certs: {Name:mk5e6bcae0a6944966b241f3c6197a472703c991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:35.967208  493486 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.key
	I1129 09:01:35.967259  493486 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.key
	I1129 09:01:35.967269  493486 certs.go:257] generating profile certs ...
	I1129 09:01:35.967334  493486 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.key
	I1129 09:01:35.967347  493486 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.crt with IP's: []
	I1129 09:01:36.097254  493486 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.crt ...
	I1129 09:01:36.097290  493486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.crt: {Name:mk21cfae97f1407d02cd99fe2a74be759b699397 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:36.097496  493486 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.key ...
	I1129 09:01:36.097514  493486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.key: {Name:mk0736bb845004e9c4d4a2d8602930ec0568eec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:36.097631  493486 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.key.a040bf72
	I1129 09:01:36.097693  493486 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.crt.a040bf72 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1129 09:01:36.144552  493486 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.crt.a040bf72 ...
	I1129 09:01:36.144579  493486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.crt.a040bf72: {Name:mk3fedcec97acb487835213600ee8b696c362f94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:36.144774  493486 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.key.a040bf72 ...
	I1129 09:01:36.144793  493486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.key.a040bf72: {Name:mk9dc52d2daf1391895a4ee3c561f559be0e2755 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:36.144904  493486 certs.go:382] copying /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.crt.a040bf72 -> /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.crt
	I1129 09:01:36.145012  493486 certs.go:386] copying /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.key.a040bf72 -> /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.key
	I1129 09:01:36.145117  493486 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.key
	I1129 09:01:36.145138  493486 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.crt with IP's: []
	I1129 09:01:36.307914  493486 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.crt ...
	I1129 09:01:36.307946  493486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.crt: {Name:mk698ad1b9e2e29d385fd97b123d5b48273c6d5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:36.308144  493486 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.key ...
	I1129 09:01:36.308172  493486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.key: {Name:mkcfd3db96260b6b8677060f32dcbd4dd8f838bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:36.308432  493486 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483.pem (1338 bytes)
	W1129 09:01:36.308490  493486 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483_empty.pem, impossibly tiny 0 bytes
	I1129 09:01:36.308506  493486 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 09:01:36.308543  493486 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:01:36.308590  493486 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:01:36.308633  493486 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem (1679 bytes)
	I1129 09:01:36.308689  493486 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem (1708 bytes)
	I1129 09:01:36.309360  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:01:36.328372  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:01:36.345872  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:01:36.363285  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 09:01:36.380427  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1129 09:01:36.397563  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 09:01:36.414929  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:01:36.432334  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 09:01:36.449233  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem --> /usr/share/ca-certificates/2594832.pem (1708 bytes)
	I1129 09:01:36.469085  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:01:36.485869  493486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483.pem --> /usr/share/ca-certificates/259483.pem (1338 bytes)
	I1129 09:01:36.502784  493486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:01:36.515208  493486 ssh_runner.go:195] Run: openssl version
	I1129 09:01:36.521390  493486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:01:36.529514  493486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:01:36.533021  493486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:01:36.533062  493486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:01:36.567579  493486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 09:01:36.576162  493486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259483.pem && ln -fs /usr/share/ca-certificates/259483.pem /etc/ssl/certs/259483.pem"
	I1129 09:01:36.584343  493486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259483.pem
	I1129 09:01:36.588122  493486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:35 /usr/share/ca-certificates/259483.pem
	I1129 09:01:36.588176  493486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259483.pem
	I1129 09:01:36.626659  493486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259483.pem /etc/ssl/certs/51391683.0"
	I1129 09:01:36.635780  493486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2594832.pem && ln -fs /usr/share/ca-certificates/2594832.pem /etc/ssl/certs/2594832.pem"
	I1129 09:01:36.644862  493486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2594832.pem
	I1129 09:01:36.648851  493486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:35 /usr/share/ca-certificates/2594832.pem
	I1129 09:01:36.648906  493486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2594832.pem
	I1129 09:01:36.691340  493486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2594832.pem /etc/ssl/certs/3ec20f2e.0"
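The openssl/ln sequence above follows the OpenSSL hashed-directory convention: each symlink under /etc/ssl/certs is named after the certificate's subject hash plus a ".0" suffix, which is how b5213941.0, 51391683.0 and 3ec20f2e.0 were derived. A sketch of that step for one certificate:

    # sketch: derive the hashed symlink name for a CA cert, as in the ln -fs steps above
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"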
	I1129 09:01:36.701173  493486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:01:36.705050  493486 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 09:01:36.705110  493486 kubeadm.go:401] StartCluster: {Name:old-k8s-version-295154 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-295154 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custo
mQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:01:36.705201  493486 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1129 09:01:36.705272  493486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:01:36.734535  493486 cri.go:89] found id: ""
	I1129 09:01:36.734592  493486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:01:36.743400  493486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 09:01:36.751273  493486 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 09:01:36.751332  493486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 09:01:36.760386  493486 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 09:01:36.760404  493486 kubeadm.go:158] found existing configuration files:
	
	I1129 09:01:36.760450  493486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 09:01:36.768796  493486 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 09:01:36.768854  493486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 09:01:36.776326  493486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 09:01:36.784663  493486 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 09:01:36.784720  493486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 09:01:36.793650  493486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 09:01:36.801817  493486 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 09:01:36.801887  493486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 09:01:36.811081  493486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 09:01:36.819075  493486 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 09:01:36.819130  493486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1129 09:01:36.827369  493486 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 09:01:36.885752  493486 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1129 09:01:36.885824  493486 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 09:01:36.932588  493486 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 09:01:36.932993  493486 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1129 09:01:36.933139  493486 kubeadm.go:319] OS: Linux
	I1129 09:01:36.933232  493486 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 09:01:36.933332  493486 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 09:01:36.933468  493486 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 09:01:36.933539  493486 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 09:01:36.933597  493486 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 09:01:36.933656  493486 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 09:01:36.933717  493486 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 09:01:36.933794  493486 kubeadm.go:319] CGROUPS_IO: enabled
	I1129 09:01:37.018039  493486 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 09:01:37.018169  493486 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 09:01:37.018319  493486 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1129 09:01:37.171075  493486 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 09:01:37.173428  493486 out.go:252]   - Generating certificates and keys ...
	I1129 09:01:37.173535  493486 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 09:01:37.173613  493486 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 09:01:37.301964  493486 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1129 09:01:37.410711  493486 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 09:01:37.550821  493486 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 09:01:37.787553  493486 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 09:01:37.889172  493486 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 09:01:37.889414  493486 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-295154] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1129 09:01:38.063017  493486 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 09:01:38.063214  493486 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-295154] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1129 09:01:38.202234  493486 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 09:01:38.262563  493486 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 09:01:36.787780  494126 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-924441
	
	I1129 09:01:36.787807  494126 ubuntu.go:182] provisioning hostname "no-preload-924441"
	I1129 09:01:36.787868  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:36.808836  494126 main.go:143] libmachine: Using SSH client type: native
	I1129 09:01:36.809153  494126 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1129 09:01:36.809173  494126 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-924441 && echo "no-preload-924441" | sudo tee /etc/hostname
	I1129 09:01:36.973090  494126 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-924441
	
	I1129 09:01:36.973172  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:36.993095  494126 main.go:143] libmachine: Using SSH client type: native
	I1129 09:01:36.993348  494126 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1129 09:01:36.993366  494126 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-924441' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-924441/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-924441' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:01:37.147252  494126 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:01:37.147286  494126 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-255825/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-255825/.minikube}
	I1129 09:01:37.147336  494126 ubuntu.go:190] setting up certificates
	I1129 09:01:37.147350  494126 provision.go:84] configureAuth start
	I1129 09:01:37.147407  494126 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-924441
	I1129 09:01:37.167771  494126 provision.go:143] copyHostCerts
	I1129 09:01:37.167841  494126 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem, removing ...
	I1129 09:01:37.167856  494126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem
	I1129 09:01:37.167941  494126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem (1078 bytes)
	I1129 09:01:37.168073  494126 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem, removing ...
	I1129 09:01:37.168087  494126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem
	I1129 09:01:37.168135  494126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem (1123 bytes)
	I1129 09:01:37.168246  494126 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem, removing ...
	I1129 09:01:37.168259  494126 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem
	I1129 09:01:37.168304  494126 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem (1679 bytes)
	I1129 09:01:37.168383  494126 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem org=jenkins.no-preload-924441 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-924441]
	I1129 09:01:37.302569  494126 provision.go:177] copyRemoteCerts
	I1129 09:01:37.302625  494126 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:01:37.302676  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:37.320965  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa Username:docker}
	I1129 09:01:37.425520  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:01:37.446589  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 09:01:37.463963  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1129 09:01:37.480486  494126 provision.go:87] duration metric: took 333.119398ms to configureAuth
	I1129 09:01:37.480511  494126 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:01:37.480667  494126 config.go:182] Loaded profile config "no-preload-924441": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:01:37.480680  494126 machine.go:97] duration metric: took 3.880753165s to provisionDockerMachine
	I1129 09:01:37.480691  494126 client.go:176] duration metric: took 7.771282469s to LocalClient.Create
	I1129 09:01:37.480714  494126 start.go:167] duration metric: took 7.771346771s to libmachine.API.Create "no-preload-924441"
	I1129 09:01:37.480726  494126 start.go:293] postStartSetup for "no-preload-924441" (driver="docker")
	I1129 09:01:37.480750  494126 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:01:37.480814  494126 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:01:37.480883  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:37.498996  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa Username:docker}
	I1129 09:01:37.602864  494126 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:01:37.606394  494126 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:01:37.606428  494126 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:01:37.606439  494126 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-255825/.minikube/addons for local assets ...
	I1129 09:01:37.606502  494126 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-255825/.minikube/files for local assets ...
	I1129 09:01:37.606593  494126 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem -> 2594832.pem in /etc/ssl/certs
	I1129 09:01:37.606724  494126 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:01:37.614670  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem --> /etc/ssl/certs/2594832.pem (1708 bytes)
	I1129 09:01:37.635134  494126 start.go:296] duration metric: took 154.380805ms for postStartSetup
	I1129 09:01:37.635554  494126 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-924441
	I1129 09:01:37.655528  494126 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/config.json ...
	I1129 09:01:37.655850  494126 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:01:37.655900  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:37.677317  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa Username:docker}
	I1129 09:01:37.781275  494126 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:01:37.786042  494126 start.go:128] duration metric: took 8.07881841s to createHost
	I1129 09:01:37.786069  494126 start.go:83] releasing machines lock for "no-preload-924441", held for 8.078998368s
	I1129 09:01:37.786141  494126 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-924441
	I1129 09:01:37.805459  494126 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:01:37.805494  494126 ssh_runner.go:195] Run: cat /version.json
	I1129 09:01:37.805552  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:37.805561  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:01:37.824515  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa Username:docker}
	I1129 09:01:37.825042  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa Username:docker}
	I1129 09:01:37.978797  494126 ssh_runner.go:195] Run: systemctl --version
	I1129 09:01:37.985561  494126 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:01:37.990121  494126 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:01:37.990198  494126 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:01:38.014806  494126 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1129 09:01:38.014833  494126 start.go:496] detecting cgroup driver to use...
	I1129 09:01:38.014872  494126 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:01:38.014922  494126 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1129 09:01:38.028890  494126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1129 09:01:38.040635  494126 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:01:38.040704  494126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:01:38.059274  494126 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:01:38.079903  494126 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:01:38.160895  494126 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:01:38.248638  494126 docker.go:234] disabling docker service ...
	I1129 09:01:38.248693  494126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:01:38.270699  494126 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:01:38.283241  494126 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:01:38.364018  494126 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:01:38.451578  494126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:01:38.464900  494126 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:01:38.478711  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1129 09:01:38.488688  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1129 09:01:38.497188  494126 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1129 09:01:38.497235  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1129 09:01:38.506143  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:01:38.514500  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1129 09:01:38.522578  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:01:38.530605  494126 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:01:38.538074  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1129 09:01:38.546395  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1129 09:01:38.554633  494126 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1129 09:01:38.564192  494126 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:01:38.571328  494126 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:01:38.578488  494126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:01:38.657072  494126 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1129 09:01:38.731899  494126 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1129 09:01:38.731970  494126 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1129 09:01:38.736165  494126 start.go:564] Will wait 60s for crictl version
	I1129 09:01:38.736223  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:38.739821  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:01:38.765727  494126 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1129 09:01:38.765799  494126 ssh_runner.go:195] Run: containerd --version
	I1129 09:01:38.788554  494126 ssh_runner.go:195] Run: containerd --version
	I1129 09:01:38.813801  494126 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1129 09:01:38.554215  493486 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 09:01:38.554337  493486 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 09:01:38.871587  493486 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 09:01:39.076048  493486 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 09:01:39.365556  493486 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 09:01:39.428949  493486 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 09:01:39.429579  493486 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 09:01:39.438444  493486 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 09:01:38.814940  494126 cli_runner.go:164] Run: docker network inspect no-preload-924441 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:01:38.832444  494126 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1129 09:01:38.836556  494126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:01:38.846826  494126 kubeadm.go:884] updating cluster {Name:no-preload-924441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-924441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:01:38.846940  494126 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:01:38.846988  494126 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:01:38.875513  494126 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1129 09:01:38.875537  494126 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1129 09:01:38.875606  494126 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:38.875606  494126 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:01:38.875633  494126 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:01:38.875642  494126 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:01:38.875663  494126 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1129 09:01:38.875672  494126 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:01:38.875613  494126 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1129 09:01:38.875710  494126 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:01:38.877065  494126 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:01:38.877082  494126 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:01:38.877098  494126 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:01:38.877104  494126 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1129 09:01:38.877132  494126 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:01:38.877185  494126 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:01:38.877233  494126 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:38.877189  494126 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
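Because the v1.34.1 preload is reported missing above ("couldn't find preloaded image ... assuming images are not preloaded"), this run falls back to LoadCachedImages: each required image is probed in containerd's k8s.io namespace and in the local docker daemon, and only the images reported missing are marked "needs transfer" from minikube's on-disk cache in the lines below. The existence probe, as a sketch for one image tag taken from the log:

    # sketch: check whether an image already exists in containerd's k8s.io namespace
    sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1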
	I1129 09:01:39.045541  494126 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.34.1" and sha "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813"
	I1129 09:01:39.045605  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:01:39.049466  494126 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.34.1" and sha "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f"
	I1129 09:01:39.049525  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:01:39.055696  494126 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.34.1" and sha "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97"
	I1129 09:01:39.055787  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:01:39.065913  494126 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.6.4-0" and sha "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115"
	I1129 09:01:39.065987  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.6.4-0
	I1129 09:01:39.071326  494126 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1129 09:01:39.071386  494126 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:01:39.071433  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.072494  494126 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.12.1" and sha "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969"
	I1129 09:01:39.072560  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:01:39.074055  494126 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1129 09:01:39.074103  494126 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:01:39.074155  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.079805  494126 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1129 09:01:39.079853  494126 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:01:39.079906  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.090225  494126 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1129 09:01:39.090271  494126 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1129 09:01:39.090279  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:01:39.090318  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.094954  494126 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.34.1" and sha "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7"
	I1129 09:01:39.095016  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:01:39.096356  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:01:39.096365  494126 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1129 09:01:39.096402  494126 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:01:39.096438  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:01:39.096440  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.108053  494126 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10.1" and sha "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
	I1129 09:01:39.108111  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10.1
	I1129 09:01:39.125198  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:01:39.125300  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1129 09:01:39.125361  494126 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1129 09:01:39.125408  494126 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:01:39.125455  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.128374  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:01:39.132565  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:01:39.132640  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:01:39.138113  494126 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1129 09:01:39.138163  494126 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1129 09:01:39.138200  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.167013  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1129 09:01:39.167128  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:01:39.167330  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1129 09:01:39.167330  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1129 09:01:39.167996  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1129 09:01:39.173113  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:01:39.173171  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1129 09:01:39.214078  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1129 09:01:39.214193  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1129 09:01:39.214389  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:01:39.214576  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1129 09:01:39.220552  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1129 09:01:39.220649  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1129 09:01:39.220857  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1129 09:01:39.220895  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1129 09:01:39.222433  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1129 09:01:39.222493  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1129 09:01:39.222587  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1129 09:01:39.222669  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1129 09:01:39.275608  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1129 09:01:39.275622  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1129 09:01:39.275679  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1129 09:01:39.275707  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1129 09:01:39.275716  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1129 09:01:39.287672  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1129 09:01:39.287708  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1129 09:01:39.287708  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1129 09:01:39.287808  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1129 09:01:39.287825  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1129 09:01:39.339051  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1129 09:01:39.339082  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1129 09:01:39.339092  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1129 09:01:39.339110  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1129 09:01:39.339137  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1129 09:01:39.339173  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1129 09:01:39.339202  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1129 09:01:39.339317  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1129 09:01:39.424948  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1129 09:01:39.424997  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1129 09:01:39.425030  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1129 09:01:39.425058  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1129 09:01:36.592807  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:01:36.593240  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:01:36.593304  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:01:36.593360  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:01:36.620981  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:36.621002  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:36.621008  460401 cri.go:89] found id: ""
	I1129 09:01:36.621018  460401 logs.go:282] 2 containers: [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:01:36.621079  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.627593  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.632350  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:01:36.632420  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:01:36.660070  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:36.660091  460401 cri.go:89] found id: ""
	I1129 09:01:36.660100  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:01:36.660156  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.664644  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:01:36.664720  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:01:36.696935  460401 cri.go:89] found id: ""
	I1129 09:01:36.696967  460401 logs.go:282] 0 containers: []
	W1129 09:01:36.696977  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:01:36.696985  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:01:36.697045  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:01:36.726832  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:36.726857  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:36.726864  460401 cri.go:89] found id: ""
	I1129 09:01:36.726874  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:01:36.726928  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.732693  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.737783  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:01:36.737848  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:01:36.765201  460401 cri.go:89] found id: ""
	I1129 09:01:36.765229  460401 logs.go:282] 0 containers: []
	W1129 09:01:36.765238  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:01:36.765245  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:01:36.765300  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:01:36.795203  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:36.795231  460401 cri.go:89] found id: "f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:36.795237  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:36.795242  460401 cri.go:89] found id: ""
	I1129 09:01:36.795251  460401 logs.go:282] 3 containers: [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:01:36.795316  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.801008  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.806325  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:36.811017  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:01:36.811088  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:01:36.840359  460401 cri.go:89] found id: ""
	I1129 09:01:36.840386  460401 logs.go:282] 0 containers: []
	W1129 09:01:36.840397  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:01:36.840406  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:01:36.840469  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:01:36.874045  460401 cri.go:89] found id: ""
	I1129 09:01:36.874068  460401 logs.go:282] 0 containers: []
	W1129 09:01:36.874075  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:01:36.874085  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:01:36.874099  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:01:36.950404  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:01:36.950426  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:01:36.950442  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:36.994232  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:01:36.994264  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:37.049507  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:01:37.049546  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:37.087133  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:01:37.087165  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:37.117577  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:01:37.117602  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:37.154176  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:01:37.154210  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:37.197090  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:01:37.197121  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:37.240775  460401 logs.go:123] Gathering logs for kube-controller-manager [f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00] ...
	I1129 09:01:37.240811  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f78d0d97ffa9f2d0cbf8a0cf305a7f0c4323a505bb9b3fa272405c6b22ab9f00"
	I1129 09:01:37.269234  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:01:37.269260  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:01:37.312948  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:01:37.312979  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:01:37.348500  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:01:37.348527  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:01:37.435755  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:01:37.435786  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:01:39.440026  493486 out.go:252]   - Booting up control plane ...
	I1129 09:01:39.440161  493486 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 09:01:39.440285  493486 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 09:01:39.440970  493486 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 09:01:39.459308  493486 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 09:01:39.460971  493486 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 09:01:39.461057  493486 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 09:01:39.610284  493486 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1129 09:01:39.952440  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:01:39.952996  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:01:39.953076  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:01:39.953145  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:01:39.990073  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:39.990100  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:39.990107  460401 cri.go:89] found id: ""
	I1129 09:01:39.990117  460401 logs.go:282] 2 containers: [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:01:39.990183  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:39.996871  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:40.002374  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:01:40.002458  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:01:40.036502  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:40.036525  460401 cri.go:89] found id: ""
	I1129 09:01:40.036542  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:01:40.036600  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:40.044171  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:01:40.044261  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:01:40.084048  460401 cri.go:89] found id: ""
	I1129 09:01:40.084165  460401 logs.go:282] 0 containers: []
	W1129 09:01:40.084184  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:01:40.084195  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:01:40.084329  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:01:40.116869  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:40.116899  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:40.116905  460401 cri.go:89] found id: ""
	I1129 09:01:40.116916  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:01:40.116982  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:40.123222  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:40.128079  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:01:40.128146  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:01:40.159071  460401 cri.go:89] found id: ""
	I1129 09:01:40.159101  460401 logs.go:282] 0 containers: []
	W1129 09:01:40.159112  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:01:40.159120  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:01:40.159178  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:01:40.191945  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:40.191973  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:40.191979  460401 cri.go:89] found id: ""
	I1129 09:01:40.191990  460401 logs.go:282] 2 containers: [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:01:40.192055  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:40.197191  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:40.202276  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:01:40.202350  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:01:40.236481  460401 cri.go:89] found id: ""
	I1129 09:01:40.236510  460401 logs.go:282] 0 containers: []
	W1129 09:01:40.236521  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:01:40.236528  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:01:40.236597  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:01:40.266476  460401 cri.go:89] found id: ""
	I1129 09:01:40.266505  460401 logs.go:282] 0 containers: []
	W1129 09:01:40.266516  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:01:40.266529  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:01:40.266547  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:40.310670  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:01:40.310713  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:40.362446  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:01:40.362487  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:40.399108  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:01:40.399138  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:40.435770  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:01:40.435799  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:01:40.485497  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:01:40.485541  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:01:40.502944  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:01:40.502977  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:01:40.592582  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:01:40.592610  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:01:40.592626  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:40.634792  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:01:40.634828  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:01:40.678348  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:01:40.678382  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:01:40.797799  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:01:40.797849  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:40.854148  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:01:40.854196  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:43.404360  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:01:43.404858  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:01:43.404925  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:01:43.404996  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:01:43.435800  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:43.435836  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:43.435843  460401 cri.go:89] found id: ""
	I1129 09:01:43.435854  460401 logs.go:282] 2 containers: [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:01:43.435923  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:43.441287  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:43.445761  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:01:43.445837  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:01:43.474830  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:43.474859  460401 cri.go:89] found id: ""
	I1129 09:01:43.474870  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:01:43.474932  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:43.481397  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:01:43.481483  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:01:43.513967  460401 cri.go:89] found id: ""
	I1129 09:01:43.513995  460401 logs.go:282] 0 containers: []
	W1129 09:01:43.514006  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:01:43.514014  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:01:43.514074  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:01:43.550388  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:43.550416  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:43.550421  460401 cri.go:89] found id: ""
	I1129 09:01:43.550431  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:01:43.550505  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:43.557316  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:43.563173  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:01:43.563248  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:01:43.599482  460401 cri.go:89] found id: ""
	I1129 09:01:43.599524  460401 logs.go:282] 0 containers: []
	W1129 09:01:43.599535  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:01:43.599545  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:01:43.599611  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:01:43.637030  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:43.637053  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:43.637059  460401 cri.go:89] found id: ""
	I1129 09:01:43.637069  460401 logs.go:282] 2 containers: [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:01:43.637130  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:43.643786  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:43.650011  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:01:43.650089  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:01:43.687244  460401 cri.go:89] found id: ""
	I1129 09:01:43.687273  460401 logs.go:282] 0 containers: []
	W1129 09:01:43.687295  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:01:43.687303  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:01:43.687372  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:01:43.726453  460401 cri.go:89] found id: ""
	I1129 09:01:43.726490  460401 logs.go:282] 0 containers: []
	W1129 09:01:43.726501  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:01:43.726515  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:01:43.726533  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:43.795442  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:01:43.795490  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:43.841417  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:01:43.841457  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:43.888511  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:01:43.888554  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:01:43.930753  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:01:43.930789  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:01:44.043358  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:01:44.043410  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:01:44.065065  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:01:44.065107  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:44.112915  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:01:44.112958  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:01:44.174077  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:01:44.174120  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:01:44.247887  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:01:44.247909  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:01:44.247927  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:44.290842  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:01:44.290882  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:44.335297  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:01:44.335330  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:39.522040  494126 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1129 09:01:39.522116  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10.1
	I1129 09:01:39.664265  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1129 09:01:39.664314  494126 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1129 09:01:39.664386  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1129 09:01:40.291377  494126 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I1129 09:01:40.291450  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:40.811289  494126 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.146868238s)
	I1129 09:01:40.811331  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1129 09:01:40.811358  494126 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1129 09:01:40.811407  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1129 09:01:40.811531  494126 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1129 09:01:40.811570  494126 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:40.811610  494126 ssh_runner.go:195] Run: which crictl
	I1129 09:01:41.858427  494126 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.046983131s)
	I1129 09:01:41.858463  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1129 09:01:41.858488  494126 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1129 09:01:41.858484  494126 ssh_runner.go:235] Completed: which crictl: (1.046843529s)
	I1129 09:01:41.858549  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1
	I1129 09:01:41.858557  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:43.352594  494126 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.494004994s)
	I1129 09:01:43.352634  494126 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.12.1: (1.49406142s)
	I1129 09:01:43.352657  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1129 09:01:43.352684  494126 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1129 09:01:43.352721  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:43.352741  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1129 09:01:44.495181  494126 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.142420788s)
	I1129 09:01:44.495251  494126 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.142485031s)
	I1129 09:01:44.495274  494126 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:44.495280  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1129 09:01:44.495307  494126 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1129 09:01:44.495357  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1129 09:01:44.611298  493486 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.002099 seconds
	I1129 09:01:44.611461  493486 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 09:01:44.626505  493486 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 09:01:45.150669  493486 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 09:01:45.150981  493486 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-295154 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 09:01:45.666153  493486 kubeadm.go:319] [bootstrap-token] Using token: fc3siq.brm7sjv6bjwb7j34
	I1129 09:01:45.667757  493486 out.go:252]   - Configuring RBAC rules ...
	I1129 09:01:45.667991  493486 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 09:01:45.673404  493486 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 09:01:45.685336  493486 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 09:01:45.691974  493486 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 09:01:45.695311  493486 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 09:01:45.698699  493486 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 09:01:45.712796  493486 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 09:01:45.913473  493486 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 09:01:46.081267  493486 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 09:01:46.081993  493486 kubeadm.go:319] 
	I1129 09:01:46.082087  493486 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 09:01:46.082095  493486 kubeadm.go:319] 
	I1129 09:01:46.082160  493486 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 09:01:46.082179  493486 kubeadm.go:319] 
	I1129 09:01:46.082199  493486 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 09:01:46.082251  493486 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 09:01:46.082302  493486 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 09:01:46.082308  493486 kubeadm.go:319] 
	I1129 09:01:46.082372  493486 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 09:01:46.082377  493486 kubeadm.go:319] 
	I1129 09:01:46.082434  493486 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 09:01:46.082445  493486 kubeadm.go:319] 
	I1129 09:01:46.082520  493486 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 09:01:46.082627  493486 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 09:01:46.082750  493486 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 09:01:46.082756  493486 kubeadm.go:319] 
	I1129 09:01:46.082891  493486 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 09:01:46.083019  493486 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 09:01:46.083030  493486 kubeadm.go:319] 
	I1129 09:01:46.083149  493486 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token fc3siq.brm7sjv6bjwb7j34 \
	I1129 09:01:46.083319  493486 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:cfb13a4080e942b53ddf5e01885fcdd270ac918e177076400130991e2b6b7778 \
	I1129 09:01:46.083366  493486 kubeadm.go:319] 	--control-plane 
	I1129 09:01:46.083383  493486 kubeadm.go:319] 
	I1129 09:01:46.083539  493486 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 09:01:46.083561  493486 kubeadm.go:319] 
	I1129 09:01:46.083696  493486 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token fc3siq.brm7sjv6bjwb7j34 \
	I1129 09:01:46.083889  493486 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:cfb13a4080e942b53ddf5e01885fcdd270ac918e177076400130991e2b6b7778 
	I1129 09:01:46.087692  493486 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1129 09:01:46.087874  493486 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1129 09:01:46.087925  493486 cni.go:84] Creating CNI manager for ""
	I1129 09:01:46.087942  493486 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:01:46.089437  493486 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1129 09:01:46.093295  493486 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 09:01:46.100033  493486 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1129 09:01:46.100061  493486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 09:01:46.118046  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1129 09:01:47.108562  493486 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 09:01:47.108767  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:47.108838  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-295154 minikube.k8s.io/updated_at=2025_11_29T09_01_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=old-k8s-version-295154 minikube.k8s.io/primary=true
	I1129 09:01:47.209163  493486 ops.go:34] apiserver oom_adj: -16
	I1129 09:01:47.209168  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:47.709726  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:48.209857  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:44.521775  494126 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1129 09:01:44.521916  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1129 09:01:45.636811  494126 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.141419574s)
	I1129 09:01:45.636849  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1129 09:01:45.636857  494126 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.114924181s)
	I1129 09:01:45.636879  494126 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1129 09:01:45.636882  494126 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1129 09:01:45.636902  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1129 09:01:45.636924  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1129 09:01:48.452908  494126 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (2.815950505s)
	I1129 09:01:48.452936  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1129 09:01:48.452972  494126 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1129 09:01:48.453041  494126 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1129 09:01:49.370622  494126 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22000-255825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1129 09:01:49.370663  494126 cache_images.go:125] Successfully loaded all cached images
	I1129 09:01:49.370668  494126 cache_images.go:94] duration metric: took 10.495116704s to LoadCachedImages
	I1129 09:01:49.370682  494126 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 containerd true true} ...
	I1129 09:01:49.370811  494126 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-924441 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-924441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:01:49.370873  494126 ssh_runner.go:195] Run: sudo crictl info
	I1129 09:01:49.397690  494126 cni.go:84] Creating CNI manager for ""
	I1129 09:01:49.397714  494126 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:01:49.397740  494126 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:01:49.397786  494126 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-924441 NodeName:no-preload-924441 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:01:49.397929  494126 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-924441"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 09:01:49.397999  494126 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:01:49.407101  494126 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1129 09:01:49.407180  494126 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1129 09:01:49.415958  494126 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1129 09:01:49.415978  494126 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256
	I1129 09:01:49.416026  494126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:01:49.416047  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1129 09:01:49.415978  494126 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	I1129 09:01:49.416149  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1129 09:01:49.429834  494126 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1129 09:01:49.429872  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1129 09:01:49.429915  494126 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1129 09:01:49.429924  494126 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1129 09:01:49.429943  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
	I1129 09:01:49.438987  494126 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1129 09:01:49.439024  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
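Annotation: the kubectl, kubeadm and kubelet binaries go through the same existence check as the images: stat the versioned path under /var/lib/minikube/binaries, and copy from the host cache when it is missing. The "Not caching binary" lines show the checksum-verified dl.k8s.io URLs that would be used if the host cache were empty. A hedged sketch of that fallback download (URL pattern from the log; the install path is the directory created by the mkdir above):
	# Sketch: fetch and install one control-plane binary with checksum verification.
	VER=v1.34.1; BIN=kubeadm
	URL="https://dl.k8s.io/release/$VER/bin/linux/amd64/$BIN"
	curl -fL "$URL" -o "/tmp/$BIN"
	echo "$(curl -fsL "$URL.sha256")  /tmp/$BIN" | sha256sum -c -
	sudo install -m 0755 "/tmp/$BIN" "/var/lib/minikube/binaries/$VER/$BIN"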
	I1129 09:01:46.884140  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:01:48.710027  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:49.210030  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:49.709395  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:50.209866  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:50.709354  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:51.209979  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:51.710291  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:52.209895  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:52.709970  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:53.209937  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:49.969644  494126 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:01:49.978574  494126 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1129 09:01:49.992833  494126 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:01:50.009876  494126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2232 bytes)
	I1129 09:01:50.023695  494126 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:01:50.027747  494126 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:01:50.038376  494126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:01:50.121247  494126 ssh_runner.go:195] Run: sudo systemctl start kubelet
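Annotation: bringing the kubelet up is three small pieces: write the systemd drop-in (10-kubeadm.conf) and kubelet.service unit shown earlier, make sure control-plane.minikube.internal resolves to the node IP in /etc/hosts, then daemon-reload and start. A condensed sketch of the hosts edit and restart, using the same temp-file approach as the logged bash one-liner:
	# Sketch: idempotent /etc/hosts entry plus kubelet start, mirroring the logged one-liner.
	if ! grep -q $'192.168.103.2\tcontrol-plane.minikube.internal' /etc/hosts; then
	  { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	    printf '192.168.103.2\tcontrol-plane.minikube.internal\n'; } > /tmp/hosts.$$
	  sudo cp /tmp/hosts.$$ /etc/hosts
	fi
	sudo systemctl daemon-reload
	sudo systemctl start kubelet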
	I1129 09:01:50.149394  494126 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441 for IP: 192.168.103.2
	I1129 09:01:50.149417  494126 certs.go:195] generating shared ca certs ...
	I1129 09:01:50.149438  494126 certs.go:227] acquiring lock for ca certs: {Name:mk5e6bcae0a6944966b241f3c6197a472703c991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:50.149602  494126 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.key
	I1129 09:01:50.149703  494126 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.key
	I1129 09:01:50.149717  494126 certs.go:257] generating profile certs ...
	I1129 09:01:50.149797  494126 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.key
	I1129 09:01:50.149812  494126 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.crt with IP's: []
	I1129 09:01:50.352856  494126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.crt ...
	I1129 09:01:50.352896  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.crt: {Name:mk24ad5255d5c075502606493622eaafcc9932fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:50.353102  494126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.key ...
	I1129 09:01:50.353115  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.key: {Name:mkdb2263ef25fafc1ea0385357022f8199c8aa35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:50.353223  494126 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.key.f72e5c7b
	I1129 09:01:50.353240  494126 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.crt.f72e5c7b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1129 09:01:50.513341  494126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.crt.f72e5c7b ...
	I1129 09:01:50.513379  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.crt.f72e5c7b: {Name:mk3f760c06958b6df21bcc9bde3527a0c97ad882 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:50.513582  494126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.key.f72e5c7b ...
	I1129 09:01:50.513601  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.key.f72e5c7b: {Name:mk4c8be15a8f6eca407c52c7afdc7ecb10357a29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:50.513678  494126 certs.go:382] copying /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.crt.f72e5c7b -> /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.crt
	I1129 09:01:50.513771  494126 certs.go:386] copying /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.key.f72e5c7b -> /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.key
	I1129 09:01:50.513831  494126 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.key
	I1129 09:01:50.513847  494126 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.crt with IP's: []
	I1129 09:01:50.651114  494126 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.crt ...
	I1129 09:01:50.651146  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.crt: {Name:mkbdace4e62ecdfbe11ae904155295b956ffc842 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:50.651330  494126 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.key ...
	I1129 09:01:50.651343  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.key: {Name:mk14d837fb2449197c689047daf9f07db1da4b8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
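Annotation: the profile certs (client, apiserver with its .f72e5c7b SAN-hash suffix, and the aggregator proxy-client) are generated in Go by minikube's crypto helpers and signed with the shared minikube CA. As a rough illustration only, not minikube's actual code, the apiserver cert amounts to something like the following openssl steps; the SAN list is the one logged for the apiserver cert, and the subject CN is an assumption:
	# Illustrative only: an openssl equivalent of the signed apiserver profile cert.
	CA=$HOME/.minikube/ca.crt; CAKEY=$HOME/.minikube/ca.key
	openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key \
	  -subj "/CN=minikube" -out apiserver.csr                      # CN assumed for illustration
	openssl x509 -req -in apiserver.csr -CA "$CA" -CAkey "$CAKEY" -CAcreateserial -days 365 \
	  -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.103.2') \
	  -out apiserver.crt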
	I1129 09:01:50.651522  494126 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483.pem (1338 bytes)
	W1129 09:01:50.651563  494126 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483_empty.pem, impossibly tiny 0 bytes
	I1129 09:01:50.651573  494126 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 09:01:50.651652  494126 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:01:50.651691  494126 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:01:50.651714  494126 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem (1679 bytes)
	I1129 09:01:50.651769  494126 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem (1708 bytes)
	I1129 09:01:50.652337  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:01:50.672071  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:01:50.691184  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:01:50.711306  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 09:01:50.730860  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1129 09:01:50.750662  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1129 09:01:50.771690  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:01:50.791789  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1129 09:01:50.811356  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483.pem --> /usr/share/ca-certificates/259483.pem (1338 bytes)
	I1129 09:01:50.833983  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem --> /usr/share/ca-certificates/2594832.pem (1708 bytes)
	I1129 09:01:50.853036  494126 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:01:50.871262  494126 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:01:50.885099  494126 ssh_runner.go:195] Run: openssl version
	I1129 09:01:50.892072  494126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259483.pem && ln -fs /usr/share/ca-certificates/259483.pem /etc/ssl/certs/259483.pem"
	I1129 09:01:50.901864  494126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259483.pem
	I1129 09:01:50.906616  494126 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:35 /usr/share/ca-certificates/259483.pem
	I1129 09:01:50.906675  494126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259483.pem
	I1129 09:01:50.943595  494126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259483.pem /etc/ssl/certs/51391683.0"
	I1129 09:01:50.953459  494126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2594832.pem && ln -fs /usr/share/ca-certificates/2594832.pem /etc/ssl/certs/2594832.pem"
	I1129 09:01:50.962610  494126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2594832.pem
	I1129 09:01:50.966703  494126 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:35 /usr/share/ca-certificates/2594832.pem
	I1129 09:01:50.966778  494126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2594832.pem
	I1129 09:01:51.002253  494126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2594832.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:01:51.012487  494126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:01:51.022391  494126 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:01:51.026710  494126 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:01:51.026814  494126 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:01:51.063394  494126 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
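Annotation: each certificate file is made trusted system-wide by linking it into /etc/ssl/certs under its OpenSSL subject-hash name, which is what the 51391683.0, 3ec20f2e.0 and b5213941.0 symlinks above are. The hash comes straight from openssl, as in this sketch:
	# Sketch: trust a PEM the same way the log does, via its OpenSSL subject hash.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")     # prints e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"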
	I1129 09:01:51.073278  494126 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:01:51.077328  494126 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 09:01:51.077396  494126 kubeadm.go:401] StartCluster: {Name:no-preload-924441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-924441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:01:51.077489  494126 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1129 09:01:51.077532  494126 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:01:51.106096  494126 cri.go:89] found id: ""
	I1129 09:01:51.106183  494126 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:01:51.115333  494126 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 09:01:51.123937  494126 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 09:01:51.124003  494126 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 09:01:51.132534  494126 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 09:01:51.132560  494126 kubeadm.go:158] found existing configuration files:
	
	I1129 09:01:51.132605  494126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 09:01:51.140877  494126 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 09:01:51.140937  494126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 09:01:51.149370  494126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 09:01:51.157660  494126 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 09:01:51.157716  494126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 09:01:51.165600  494126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 09:01:51.173968  494126 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 09:01:51.174023  494126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 09:01:51.182141  494126 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 09:01:51.190488  494126 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 09:01:51.190548  494126 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1129 09:01:51.198568  494126 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 09:01:51.257848  494126 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1129 09:01:51.317135  494126 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1129 09:01:51.885035  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1129 09:01:51.885110  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:01:51.885188  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:01:51.917617  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:01:51.917638  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:51.917644  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:51.917647  460401 cri.go:89] found id: ""
	I1129 09:01:51.917655  460401 logs.go:282] 3 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:01:51.917717  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:51.923877  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:51.929304  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:51.934465  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:01:51.934561  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:01:51.963685  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:51.963708  460401 cri.go:89] found id: ""
	I1129 09:01:51.963719  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:01:51.963801  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:51.968956  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:01:51.969028  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:01:51.996971  460401 cri.go:89] found id: ""
	I1129 09:01:51.997000  460401 logs.go:282] 0 containers: []
	W1129 09:01:51.997007  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:01:51.997013  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:01:51.997078  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:01:52.028822  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:01:52.028850  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:52.028856  460401 cri.go:89] found id: ""
	I1129 09:01:52.028866  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:01:52.028936  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:52.034812  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:52.039943  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:01:52.040009  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:01:52.069835  460401 cri.go:89] found id: ""
	I1129 09:01:52.069866  460401 logs.go:282] 0 containers: []
	W1129 09:01:52.069878  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:01:52.069886  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:01:52.069952  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:01:52.104321  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:52.104340  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:52.104344  460401 cri.go:89] found id: ""
	I1129 09:01:52.104352  460401 logs.go:282] 2 containers: [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:01:52.104402  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:52.109901  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:01:52.114778  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:01:52.114862  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:01:52.144981  460401 cri.go:89] found id: ""
	I1129 09:01:52.145005  460401 logs.go:282] 0 containers: []
	W1129 09:01:52.145013  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:01:52.145019  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:01:52.145069  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:01:52.174604  460401 cri.go:89] found id: ""
	I1129 09:01:52.174632  460401 logs.go:282] 0 containers: []
	W1129 09:01:52.174641  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:01:52.174651  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:01:52.174665  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:01:52.207427  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:01:52.207458  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:01:52.249558  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:01:52.249600  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:01:52.300742  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:01:52.300785  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:01:52.385321  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:01:52.385365  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:01:52.405491  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:01:52.405533  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:01:52.448465  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:01:52.448502  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:01:52.489466  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:01:52.489506  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:01:52.534107  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:01:52.534146  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:01:52.572361  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:01:52.572401  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:01:52.606656  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:01:52.606692  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1129 09:01:53.710005  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:54.209471  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:54.709414  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:55.209967  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:55.709378  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:56.210032  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:56.709982  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:57.209266  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:57.709968  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:58.209425  493486 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:01:58.303052  493486 kubeadm.go:1114] duration metric: took 11.19438409s to wait for elevateKubeSystemPrivileges
	I1129 09:01:58.303107  493486 kubeadm.go:403] duration metric: took 21.598001105s to StartCluster
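Annotation: the long run of repeated "kubectl get sa default" calls at roughly half-second intervals is the elevateKubeSystemPrivileges wait: the default ServiceAccount only exists once the controller-manager's service-account controller has run, so minikube retries the lookup until it succeeds (11.19s for this profile). The equivalent wait loop is just:
	# Sketch: poll until the default ServiceAccount exists, as the repeated lines above do.
	KUBECTL=/var/lib/minikube/binaries/v1.28.0/kubectl
	until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done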
	I1129 09:01:58.303162  493486 settings.go:142] acquiring lock: {Name:mk6dbed29e5e99d89b1cbbd9e561d8f8791ae9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:58.303278  493486 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-255825/kubeconfig
	I1129 09:01:58.305561  493486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/kubeconfig: {Name:mk7d91966efd00ccef892cf02f31ec14469accbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:01:58.305924  493486 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1129 09:01:58.306112  493486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 09:01:58.306351  493486 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:01:58.306713  493486 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-295154"
	I1129 09:01:58.306776  493486 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-295154"
	I1129 09:01:58.306795  493486 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-295154"
	I1129 09:01:58.306776  493486 config.go:182] Loaded profile config "old-k8s-version-295154": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1129 09:01:58.306807  493486 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-295154"
	I1129 09:01:58.306834  493486 host.go:66] Checking if "old-k8s-version-295154" exists ...
	I1129 09:01:58.307864  493486 out.go:179] * Verifying Kubernetes components...
	I1129 09:01:58.307930  493486 cli_runner.go:164] Run: docker container inspect old-k8s-version-295154 --format={{.State.Status}}
	I1129 09:01:58.308039  493486 cli_runner.go:164] Run: docker container inspect old-k8s-version-295154 --format={{.State.Status}}
	I1129 09:01:58.309327  493486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:01:58.335085  493486 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-295154"
	I1129 09:01:58.335144  493486 host.go:66] Checking if "old-k8s-version-295154" exists ...
	I1129 09:01:58.335642  493486 cli_runner.go:164] Run: docker container inspect old-k8s-version-295154 --format={{.State.Status}}
	I1129 09:01:58.337139  493486 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:01:58.338693  493486 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:01:58.338716  493486 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:01:58.338899  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:58.368947  493486 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:01:58.368979  493486 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:01:58.369072  493486 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-295154
	I1129 09:01:58.378680  493486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa Username:docker}
	I1129 09:01:58.399464  493486 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/old-k8s-version-295154/id_rsa Username:docker}
	I1129 09:01:58.438617  493486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 09:01:58.498671  493486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:01:58.528524  493486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:01:58.536443  493486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:01:58.718007  493486 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
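Annotation: the "host record injected into CoreDNS's ConfigMap" line is the outcome of the long sed pipeline above: fetch the coredns ConfigMap, insert a hosts {} block mapping host.minikube.internal to the gateway IP ahead of the forward plugin (plus a log directive after errors), and replace the ConfigMap. Stripped to its core, the edit looks like this (kubectl path and IP taken from the log):
	# Sketch: inject host.minikube.internal into the CoreDNS Corefile, then replace the ConfigMap.
	KUBECTL="sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
	$KUBECTL -n kube-system get configmap coredns -o yaml \
	  | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' \
	  | $KUBECTL -n kube-system replace -f -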
	I1129 09:01:58.719713  493486 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-295154" to be "Ready" ...
	I1129 09:01:58.976512  493486 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1129 09:02:01.574795  494126 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1129 09:02:01.574869  494126 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 09:02:01.575071  494126 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 09:02:01.575154  494126 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1129 09:02:01.575204  494126 kubeadm.go:319] OS: Linux
	I1129 09:02:01.575304  494126 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 09:02:01.575403  494126 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 09:02:01.575496  494126 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 09:02:01.575567  494126 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 09:02:01.575645  494126 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 09:02:01.575713  494126 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 09:02:01.575809  494126 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 09:02:01.575872  494126 kubeadm.go:319] CGROUPS_IO: enabled
	I1129 09:02:01.575964  494126 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 09:02:01.576092  494126 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 09:02:01.576217  494126 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1129 09:02:01.576325  494126 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 09:02:01.578171  494126 out.go:252]   - Generating certificates and keys ...
	I1129 09:02:01.578298  494126 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 09:02:01.578401  494126 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 09:02:01.578499  494126 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1129 09:02:01.578589  494126 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 09:02:01.578680  494126 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 09:02:01.578785  494126 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 09:02:01.578876  494126 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 09:02:01.579019  494126 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-924441] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1129 09:02:01.579122  494126 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 09:02:01.579311  494126 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-924441] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1129 09:02:01.579420  494126 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 09:02:01.579532  494126 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 09:02:01.579609  494126 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 09:02:01.579696  494126 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 09:02:01.579806  494126 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 09:02:01.579894  494126 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 09:02:01.579971  494126 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 09:02:01.580076  494126 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 09:02:01.580125  494126 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 09:02:01.580195  494126 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 09:02:01.580259  494126 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 09:02:01.582121  494126 out.go:252]   - Booting up control plane ...
	I1129 09:02:01.582267  494126 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 09:02:01.582364  494126 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 09:02:01.582460  494126 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 09:02:01.582603  494126 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 09:02:01.582773  494126 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1129 09:02:01.582902  494126 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1129 09:02:01.583026  494126 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 09:02:01.583068  494126 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 09:02:01.583182  494126 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1129 09:02:01.583325  494126 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1129 09:02:01.583413  494126 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001845652s
	I1129 09:02:01.583537  494126 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1129 09:02:01.583671  494126 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1129 09:02:01.583787  494126 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1129 09:02:01.583879  494126 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1129 09:02:01.583985  494126 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.852889014s
	I1129 09:02:01.584071  494126 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.023243656s
	I1129 09:02:01.584163  494126 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00195345s
	I1129 09:02:01.584314  494126 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 09:02:01.584493  494126 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 09:02:01.584584  494126 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 09:02:01.584867  494126 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-924441 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 09:02:01.584955  494126 kubeadm.go:319] [bootstrap-token] Using token: mvtuq7.pg2byk8o9fh5nfa2
	I1129 09:02:01.587787  494126 out.go:252]   - Configuring RBAC rules ...
	I1129 09:02:01.587916  494126 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 09:02:01.588028  494126 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 09:02:01.588232  494126 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 09:02:01.588384  494126 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 09:02:01.588517  494126 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 09:02:01.588635  494126 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 09:02:01.588779  494126 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 09:02:01.588837  494126 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 09:02:01.588907  494126 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 09:02:01.588916  494126 kubeadm.go:319] 
	I1129 09:02:01.589016  494126 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 09:02:01.589032  494126 kubeadm.go:319] 
	I1129 09:02:01.589151  494126 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 09:02:01.589160  494126 kubeadm.go:319] 
	I1129 09:02:01.589205  494126 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 09:02:01.589280  494126 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 09:02:01.589374  494126 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 09:02:01.589388  494126 kubeadm.go:319] 
	I1129 09:02:01.589465  494126 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 09:02:01.589473  494126 kubeadm.go:319] 
	I1129 09:02:01.589554  494126 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 09:02:01.589563  494126 kubeadm.go:319] 
	I1129 09:02:01.589607  494126 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 09:02:01.589671  494126 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 09:02:01.589782  494126 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 09:02:01.589795  494126 kubeadm.go:319] 
	I1129 09:02:01.589906  494126 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 09:02:01.590049  494126 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 09:02:01.590058  494126 kubeadm.go:319] 
	I1129 09:02:01.590132  494126 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token mvtuq7.pg2byk8o9fh5nfa2 \
	I1129 09:02:01.590268  494126 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:cfb13a4080e942b53ddf5e01885fcdd270ac918e177076400130991e2b6b7778 \
	I1129 09:02:01.590302  494126 kubeadm.go:319] 	--control-plane 
	I1129 09:02:01.590309  494126 kubeadm.go:319] 
	I1129 09:02:01.590425  494126 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 09:02:01.590434  494126 kubeadm.go:319] 
	I1129 09:02:01.590567  494126 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token mvtuq7.pg2byk8o9fh5nfa2 \
	I1129 09:02:01.590744  494126 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:cfb13a4080e942b53ddf5e01885fcdd270ac918e177076400130991e2b6b7778 
	I1129 09:02:01.590761  494126 cni.go:84] Creating CNI manager for ""
	I1129 09:02:01.590770  494126 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:02:01.592367  494126 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1129 09:01:58.977447  493486 addons.go:530] duration metric: took 671.096745ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1129 09:01:59.226693  493486 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-295154" context rescaled to 1 replicas
	W1129 09:02:00.723077  493486 node_ready.go:57] node "old-k8s-version-295154" has "Ready":"False" status (will retry)
	W1129 09:02:02.723240  493486 node_ready.go:57] node "old-k8s-version-295154" has "Ready":"False" status (will retry)
	I1129 09:02:01.593492  494126 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 09:02:01.598544  494126 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1129 09:02:01.598567  494126 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 09:02:01.615144  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
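Annotation: because the docker driver is paired with the containerd runtime, minikube picks kindnet as the CNI and applies a generated manifest from /var/tmp/minikube/cni.yaml with the versioned kubectl, exactly as logged. A quick way to confirm the result afterwards; note the app=kindnet label is an assumption about the kindnet DaemonSet, not something shown in this log:
	# Sketch: apply the generated CNI manifest, then check the kindnet DaemonSet pods.
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  apply -f /var/tmp/minikube/cni.yaml
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get pods -l app=kindnet      # label assumed; adjust to the manifest's labels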
	I1129 09:02:01.883935  494126 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 09:02:01.884024  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:01.884114  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-924441 minikube.k8s.io/updated_at=2025_11_29T09_02_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=no-preload-924441 minikube.k8s.io/primary=true
	I1129 09:02:01.969638  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:01.982178  494126 ops.go:34] apiserver oom_adj: -16
	I1129 09:02:02.470301  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:02.969878  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:03.470379  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:03.970554  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:04.469853  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:02.669495  460401 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.062771993s)
	W1129 09:02:02.669547  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1129 09:02:02.669577  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:02.669596  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:02.710559  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:02.710605  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:04.970119  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:05.470767  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:05.969852  494126 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:02:06.052010  494126 kubeadm.go:1114] duration metric: took 4.168052566s to wait for elevateKubeSystemPrivileges
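
	[editor's note] The repeated "get sa default" runs above are a readiness poll: the privilege-elevation step waits until the default service account exists. A minimal sketch of such a poll with a deadline, assuming a fixed ~500ms interval (the log shows roughly that spacing) and a hypothetical timeout:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the deadline passes.
	func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil // service account exists; safe to proceed
			}
			time.Sleep(500 * time.Millisecond) // assumed retry interval
		}
		return fmt.Errorf("default service account not found within %s", timeout)
	}

	func main() {
		err := waitForDefaultSA("/var/lib/minikube/binaries/v1.34.1/kubectl",
			"/var/lib/minikube/kubeconfig", 2*time.Minute)
		fmt.Println("result:", err)
	}
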
	I1129 09:02:06.052057  494126 kubeadm.go:403] duration metric: took 14.974666914s to StartCluster
	I1129 09:02:06.052081  494126 settings.go:142] acquiring lock: {Name:mk6dbed29e5e99d89b1cbbd9e561d8f8791ae9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:02:06.052174  494126 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-255825/kubeconfig
	I1129 09:02:06.054258  494126 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/kubeconfig: {Name:mk7d91966efd00ccef892cf02f31ec14469accbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:02:06.054571  494126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 09:02:06.054563  494126 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1129 09:02:06.054635  494126 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:02:06.054874  494126 config.go:182] Loaded profile config "no-preload-924441": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:02:06.054888  494126 addons.go:70] Setting storage-provisioner=true in profile "no-preload-924441"
	I1129 09:02:06.054933  494126 addons.go:70] Setting default-storageclass=true in profile "no-preload-924441"
	I1129 09:02:06.054947  494126 addons.go:239] Setting addon storage-provisioner=true in "no-preload-924441"
	I1129 09:02:06.054963  494126 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-924441"
	I1129 09:02:06.055012  494126 host.go:66] Checking if "no-preload-924441" exists ...
	I1129 09:02:06.055424  494126 cli_runner.go:164] Run: docker container inspect no-preload-924441 --format={{.State.Status}}
	I1129 09:02:06.055667  494126 cli_runner.go:164] Run: docker container inspect no-preload-924441 --format={{.State.Status}}
	I1129 09:02:06.056967  494126 out.go:179] * Verifying Kubernetes components...
	I1129 09:02:06.060417  494126 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:02:06.083076  494126 addons.go:239] Setting addon default-storageclass=true in "no-preload-924441"
	I1129 09:02:06.083127  494126 host.go:66] Checking if "no-preload-924441" exists ...
	I1129 09:02:06.083615  494126 cli_runner.go:164] Run: docker container inspect no-preload-924441 --format={{.State.Status}}
	I1129 09:02:06.086028  494126 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:02:06.087100  494126 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:02:06.087121  494126 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:02:06.087200  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:02:06.110337  494126 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:02:06.110366  494126 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:02:06.111183  494126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-924441
	I1129 09:02:06.116769  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa Username:docker}
	I1129 09:02:06.140007  494126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/no-preload-924441/id_rsa Username:docker}
	I1129 09:02:06.151655  494126 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 09:02:06.208406  494126 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:02:06.241470  494126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:02:06.273558  494126 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:02:06.324896  494126 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
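
	[editor's note] The sed pipeline above injects a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway IP (192.168.103.1 here). A minimal Go sketch mirroring just the hosts-block insertion (it ignores the `log` directive the same pipeline also adds); the sample Corefile text is illustrative:

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord inserts a hosts{} block immediately before the
	// "forward . /etc/resolv.conf" line of a Corefile.
	func injectHostRecord(corefile, hostIP string) string {
		hostsBlock := fmt.Sprintf(
			"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }",
			hostIP)
		var out []string
		for _, line := range strings.Split(corefile, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
				out = append(out, hostsBlock)
			}
			out = append(out, line)
		}
		return strings.Join(out, "\n")
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}"
		fmt.Println(injectHostRecord(corefile, "192.168.103.1"))
	}
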
	I1129 09:02:06.327889  494126 node_ready.go:35] waiting up to 6m0s for node "no-preload-924441" to be "Ready" ...
	I1129 09:02:06.574594  494126 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1129 09:02:05.223590  493486 node_ready.go:57] node "old-k8s-version-295154" has "Ready":"False" status (will retry)
	W1129 09:02:07.223929  493486 node_ready.go:57] node "old-k8s-version-295154" has "Ready":"False" status (will retry)
	I1129 09:02:06.575644  494126 addons.go:530] duration metric: took 521.007476ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1129 09:02:06.830448  494126 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-924441" context rescaled to 1 replicas
	W1129 09:02:08.331406  494126 node_ready.go:57] node "no-preload-924441" has "Ready":"False" status (will retry)
	I1129 09:02:05.259668  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:02:07.201576  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:43246->192.168.85.2:8443: read: connection reset by peer
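
	[editor's note] The probe above treats a connection reset on https://192.168.85.2:8443/healthz as "apiserver not up yet" and falls back to gathering logs. A minimal Go sketch of such a probe; skipping TLS verification for the local endpoint is an assumption made for brevity, not necessarily what minikube does:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz returns nil only when GET /healthz answers 200.
	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption: skip certificate verification for the local apiserver endpoint.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err // e.g. connection refused/reset while the apiserver restarts
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		return nil
	}

	func main() {
		fmt.Println(checkHealthz("https://192.168.85.2:8443/healthz"))
	}
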
	I1129 09:02:07.201690  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:02:07.201778  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:02:07.234753  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:07.234781  460401 cri.go:89] found id: "5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	I1129 09:02:07.234788  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:07.234793  460401 cri.go:89] found id: ""
	I1129 09:02:07.234804  460401 logs.go:282] 3 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:02:07.234869  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.240257  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.245641  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.251131  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:02:07.251196  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:02:07.280579  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:07.280608  460401 cri.go:89] found id: ""
	I1129 09:02:07.280621  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:02:07.280682  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.286123  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:02:07.286213  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:02:07.317491  460401 cri.go:89] found id: ""
	I1129 09:02:07.317519  460401 logs.go:282] 0 containers: []
	W1129 09:02:07.317528  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:02:07.317534  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:02:07.317586  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:02:07.347513  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:07.347534  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:07.347538  460401 cri.go:89] found id: ""
	I1129 09:02:07.347546  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:02:07.347610  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.353144  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.358223  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:02:07.358303  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:02:07.387488  460401 cri.go:89] found id: ""
	I1129 09:02:07.387516  460401 logs.go:282] 0 containers: []
	W1129 09:02:07.387525  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:02:07.387532  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:02:07.387595  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:02:07.418490  460401 cri.go:89] found id: "c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:07.418512  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:07.418516  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:07.418519  460401 cri.go:89] found id: ""
	I1129 09:02:07.418527  460401 logs.go:282] 3 containers: [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:02:07.418587  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.423956  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.429140  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:07.434196  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:02:07.434281  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:02:07.463114  460401 cri.go:89] found id: ""
	I1129 09:02:07.463138  460401 logs.go:282] 0 containers: []
	W1129 09:02:07.463148  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:02:07.463156  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:02:07.463222  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:02:07.494533  460401 cri.go:89] found id: ""
	I1129 09:02:07.494567  460401 logs.go:282] 0 containers: []
	W1129 09:02:07.494579  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:02:07.494592  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:02:07.494604  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:02:07.546238  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:02:07.546282  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:02:07.634664  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:02:07.634702  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:02:07.696753  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:02:07.696779  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:07.696796  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:07.733303  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:07.733343  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:07.786770  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:02:07.786809  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:07.824791  460401 logs.go:123] Gathering logs for kube-controller-manager [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6] ...
	I1129 09:02:07.824831  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:07.857029  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:02:07.857058  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:02:07.892009  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:02:07.892046  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:02:07.907552  460401 logs.go:123] Gathering logs for kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095] ...
	I1129 09:02:07.907596  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	W1129 09:02:07.937558  460401 logs.go:130] failed kube-apiserver [5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095": Process exited with status 1
	stdout:
	
	stderr:
	E1129 09:02:07.934436    4413 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095\": not found" containerID="5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	time="2025-11-29T09:02:07Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095\": not found"
	 output: 
	** stderr ** 
	E1129 09:02:07.934436    4413 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095\": not found" containerID="5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095"
	time="2025-11-29T09:02:07Z" level=fatal msg="rpc error: code = NotFound desc = an error occurred when try to find container \"5e7b60288765099d1aa5333e90b5c31c9314dff5f9864968413148621de30095\": not found"
	
	** /stderr **
	I1129 09:02:07.937577  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:02:07.937591  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:07.976501  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:02:07.976553  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:08.017968  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:02:08.018008  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:08.049057  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:02:08.049090  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	W1129 09:02:09.723662  493486 node_ready.go:57] node "old-k8s-version-295154" has "Ready":"False" status (will retry)
	W1129 09:02:12.223024  493486 node_ready.go:57] node "old-k8s-version-295154" has "Ready":"False" status (will retry)
	I1129 09:02:13.224090  493486 node_ready.go:49] node "old-k8s-version-295154" is "Ready"
	I1129 09:02:13.224128  493486 node_ready.go:38] duration metric: took 14.504358398s for node "old-k8s-version-295154" to be "Ready" ...
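
	[editor's note] The node_ready loop above polls the node object until its Ready condition turns true (about 14.5s here). A hedged, roughly equivalent check from the CLI, expressed as a small Go wrapper; minikube itself polls the API in Go rather than shelling out, so this is only an illustration:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Block until the node reports condition Ready, or time out after 6 minutes.
		out, err := exec.Command("kubectl", "--context", "old-k8s-version-295154",
			"wait", "--for=condition=Ready", "node/old-k8s-version-295154",
			"--timeout=6m",
		).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("wait failed:", err)
		}
	}
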
	I1129 09:02:13.224148  493486 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:02:13.224211  493486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:02:13.243313  493486 api_server.go:72] duration metric: took 14.93733902s to wait for apiserver process to appear ...
	I1129 09:02:13.243343  493486 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:02:13.243370  493486 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1129 09:02:13.250694  493486 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1129 09:02:13.251984  493486 api_server.go:141] control plane version: v1.28.0
	I1129 09:02:13.252015  493486 api_server.go:131] duration metric: took 8.663278ms to wait for apiserver health ...
	I1129 09:02:13.252026  493486 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:02:13.255767  493486 system_pods.go:59] 8 kube-system pods found
	I1129 09:02:13.255813  493486 system_pods.go:61] "coredns-5dd5756b68-phw28" [7fc2b8dd-43dd-43df-8887-9ffa6de36fb4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:13.255822  493486 system_pods.go:61] "etcd-old-k8s-version-295154" [b49cf7c8-8d72-4db9-a96f-d796fd8d9e08] Running
	I1129 09:02:13.255829  493486 system_pods.go:61] "kindnet-k4n9l" [74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8] Running
	I1129 09:02:13.255835  493486 system_pods.go:61] "kube-apiserver-old-k8s-version-295154" [e4ca0771-197f-4d77-97f0-7a7778e227de] Running
	I1129 09:02:13.255841  493486 system_pods.go:61] "kube-controller-manager-old-k8s-version-295154" [6825ac68-da0d-474d-ac97-53398adffd73] Running
	I1129 09:02:13.255847  493486 system_pods.go:61] "kube-proxy-4rfb4" [05ef67c3-0d6e-453d-a0e5-81c649c3e033] Running
	I1129 09:02:13.255853  493486 system_pods.go:61] "kube-scheduler-old-k8s-version-295154" [97d5e6fb-5cb8-4a03-a8df-3f76df5b2671] Running
	I1129 09:02:13.255860  493486 system_pods.go:61] "storage-provisioner" [359871fd-a77c-430a-87c1-b313992718e2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:13.255869  493486 system_pods.go:74] duration metric: took 3.834915ms to wait for pod list to return data ...
	I1129 09:02:13.255879  493486 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:02:13.259936  493486 default_sa.go:45] found service account: "default"
	I1129 09:02:13.259965  493486 default_sa.go:55] duration metric: took 4.078247ms for default service account to be created ...
	I1129 09:02:13.259977  493486 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:02:13.264489  493486 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:13.264528  493486 system_pods.go:89] "coredns-5dd5756b68-phw28" [7fc2b8dd-43dd-43df-8887-9ffa6de36fb4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:13.264536  493486 system_pods.go:89] "etcd-old-k8s-version-295154" [b49cf7c8-8d72-4db9-a96f-d796fd8d9e08] Running
	I1129 09:02:13.264545  493486 system_pods.go:89] "kindnet-k4n9l" [74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8] Running
	I1129 09:02:13.264554  493486 system_pods.go:89] "kube-apiserver-old-k8s-version-295154" [e4ca0771-197f-4d77-97f0-7a7778e227de] Running
	I1129 09:02:13.264562  493486 system_pods.go:89] "kube-controller-manager-old-k8s-version-295154" [6825ac68-da0d-474d-ac97-53398adffd73] Running
	I1129 09:02:13.264567  493486 system_pods.go:89] "kube-proxy-4rfb4" [05ef67c3-0d6e-453d-a0e5-81c649c3e033] Running
	I1129 09:02:13.264572  493486 system_pods.go:89] "kube-scheduler-old-k8s-version-295154" [97d5e6fb-5cb8-4a03-a8df-3f76df5b2671] Running
	I1129 09:02:13.264586  493486 system_pods.go:89] "storage-provisioner" [359871fd-a77c-430a-87c1-b313992718e2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:13.264615  493486 retry.go:31] will retry after 309.906184ms: missing components: kube-dns
	W1129 09:02:10.832100  494126 node_ready.go:57] node "no-preload-924441" has "Ready":"False" status (will retry)
	W1129 09:02:13.330706  494126 node_ready.go:57] node "no-preload-924441" has "Ready":"False" status (will retry)
	I1129 09:02:10.584596  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:02:10.585082  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:02:10.585139  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:02:10.585192  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:02:10.615813  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:10.615833  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:10.615837  460401 cri.go:89] found id: ""
	I1129 09:02:10.615846  460401 logs.go:282] 2 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:02:10.615910  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.621079  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.625927  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:02:10.626017  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:02:10.655780  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:10.655808  460401 cri.go:89] found id: ""
	I1129 09:02:10.655817  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:02:10.655877  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.661197  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:02:10.661278  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:02:10.692401  460401 cri.go:89] found id: ""
	I1129 09:02:10.692423  460401 logs.go:282] 0 containers: []
	W1129 09:02:10.692431  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:02:10.692436  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:02:10.692496  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:02:10.721278  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:10.721303  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:10.721309  460401 cri.go:89] found id: ""
	I1129 09:02:10.721320  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:02:10.721387  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.726913  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.731556  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:02:10.731637  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:02:10.759345  460401 cri.go:89] found id: ""
	I1129 09:02:10.759373  460401 logs.go:282] 0 containers: []
	W1129 09:02:10.759381  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:02:10.759386  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:02:10.759446  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:02:10.790190  460401 cri.go:89] found id: "c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:10.790215  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:10.790221  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:10.790226  460401 cri.go:89] found id: ""
	I1129 09:02:10.790236  460401 logs.go:282] 3 containers: [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:02:10.790305  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.795588  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.800622  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:10.805263  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:02:10.805338  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:02:10.834942  460401 cri.go:89] found id: ""
	I1129 09:02:10.834973  460401 logs.go:282] 0 containers: []
	W1129 09:02:10.834991  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:02:10.834999  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:02:10.835065  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:02:10.872503  460401 cri.go:89] found id: ""
	I1129 09:02:10.872536  460401 logs.go:282] 0 containers: []
	W1129 09:02:10.872547  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:02:10.872562  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:10.872586  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:10.926644  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:02:10.926681  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:10.965025  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:02:10.965069  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:10.998068  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:02:10.998102  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:02:11.043686  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:02:11.043743  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:02:11.134380  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:02:11.134422  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:02:11.150475  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:02:11.150510  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:02:11.210329  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:02:11.210348  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:02:11.210364  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:11.250422  460401 logs.go:123] Gathering logs for kube-controller-manager [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6] ...
	I1129 09:02:11.250457  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:11.280219  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:02:11.280255  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:11.315565  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:02:11.315596  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:02:11.349327  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:11.349358  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:11.384696  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:02:11.384729  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:13.923850  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:02:13.924341  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:02:13.924398  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:02:13.924461  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:02:13.954410  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:13.954430  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:13.954434  460401 cri.go:89] found id: ""
	I1129 09:02:13.954442  460401 logs.go:282] 2 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:02:13.954501  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:13.959624  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:13.964312  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:02:13.964377  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:02:13.992596  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:13.992625  460401 cri.go:89] found id: ""
	I1129 09:02:13.992636  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:02:13.992703  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:13.998893  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:02:13.998972  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:02:14.028106  460401 cri.go:89] found id: ""
	I1129 09:02:14.028140  460401 logs.go:282] 0 containers: []
	W1129 09:02:14.028152  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:02:14.028161  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:02:14.028230  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:02:14.057393  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:14.057414  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:14.057418  460401 cri.go:89] found id: ""
	I1129 09:02:14.057427  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:02:14.057482  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:14.062623  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:14.067579  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:02:14.067654  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:02:14.102801  460401 cri.go:89] found id: ""
	I1129 09:02:14.102840  460401 logs.go:282] 0 containers: []
	W1129 09:02:14.102853  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:02:14.102860  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:02:14.102925  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:02:14.135951  460401 cri.go:89] found id: "c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:14.135979  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:14.135985  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:14.135988  460401 cri.go:89] found id: ""
	I1129 09:02:14.135998  460401 logs.go:282] 3 containers: [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:02:14.136064  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:14.141983  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:14.147316  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:14.152463  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:02:14.152555  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:02:14.181365  460401 cri.go:89] found id: ""
	I1129 09:02:14.181398  460401 logs.go:282] 0 containers: []
	W1129 09:02:14.181409  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:02:14.181417  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:02:14.181477  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:02:14.210267  460401 cri.go:89] found id: ""
	I1129 09:02:14.210292  460401 logs.go:282] 0 containers: []
	W1129 09:02:14.210300  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:02:14.210310  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:02:14.210323  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:02:14.298625  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:02:14.298662  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:02:14.315504  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:02:14.315529  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:14.357098  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:14.357134  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:14.407082  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:02:14.407133  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:14.441442  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:02:14.441482  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:14.476419  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:02:14.476452  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:02:13.579150  493486 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:13.579183  493486 system_pods.go:89] "coredns-5dd5756b68-phw28" [7fc2b8dd-43dd-43df-8887-9ffa6de36fb4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:13.579189  493486 system_pods.go:89] "etcd-old-k8s-version-295154" [b49cf7c8-8d72-4db9-a96f-d796fd8d9e08] Running
	I1129 09:02:13.579195  493486 system_pods.go:89] "kindnet-k4n9l" [74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8] Running
	I1129 09:02:13.579199  493486 system_pods.go:89] "kube-apiserver-old-k8s-version-295154" [e4ca0771-197f-4d77-97f0-7a7778e227de] Running
	I1129 09:02:13.579203  493486 system_pods.go:89] "kube-controller-manager-old-k8s-version-295154" [6825ac68-da0d-474d-ac97-53398adffd73] Running
	I1129 09:02:13.579206  493486 system_pods.go:89] "kube-proxy-4rfb4" [05ef67c3-0d6e-453d-a0e5-81c649c3e033] Running
	I1129 09:02:13.579210  493486 system_pods.go:89] "kube-scheduler-old-k8s-version-295154" [97d5e6fb-5cb8-4a03-a8df-3f76df5b2671] Running
	I1129 09:02:13.579220  493486 system_pods.go:89] "storage-provisioner" [359871fd-a77c-430a-87c1-b313992718e2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:13.579237  493486 retry.go:31] will retry after 360.039109ms: missing components: kube-dns
	I1129 09:02:13.944039  493486 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:13.944084  493486 system_pods.go:89] "coredns-5dd5756b68-phw28" [7fc2b8dd-43dd-43df-8887-9ffa6de36fb4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:13.944094  493486 system_pods.go:89] "etcd-old-k8s-version-295154" [b49cf7c8-8d72-4db9-a96f-d796fd8d9e08] Running
	I1129 09:02:13.944104  493486 system_pods.go:89] "kindnet-k4n9l" [74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8] Running
	I1129 09:02:13.944110  493486 system_pods.go:89] "kube-apiserver-old-k8s-version-295154" [e4ca0771-197f-4d77-97f0-7a7778e227de] Running
	I1129 09:02:13.944116  493486 system_pods.go:89] "kube-controller-manager-old-k8s-version-295154" [6825ac68-da0d-474d-ac97-53398adffd73] Running
	I1129 09:02:13.944121  493486 system_pods.go:89] "kube-proxy-4rfb4" [05ef67c3-0d6e-453d-a0e5-81c649c3e033] Running
	I1129 09:02:13.944127  493486 system_pods.go:89] "kube-scheduler-old-k8s-version-295154" [97d5e6fb-5cb8-4a03-a8df-3f76df5b2671] Running
	I1129 09:02:13.944133  493486 system_pods.go:89] "storage-provisioner" [359871fd-a77c-430a-87c1-b313992718e2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:13.944166  493486 retry.go:31] will retry after 339.658127ms: missing components: kube-dns
	I1129 09:02:14.288499  493486 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:14.288533  493486 system_pods.go:89] "coredns-5dd5756b68-phw28" [7fc2b8dd-43dd-43df-8887-9ffa6de36fb4] Running
	I1129 09:02:14.288543  493486 system_pods.go:89] "etcd-old-k8s-version-295154" [b49cf7c8-8d72-4db9-a96f-d796fd8d9e08] Running
	I1129 09:02:14.288548  493486 system_pods.go:89] "kindnet-k4n9l" [74cdf2cd-3f3a-4be5-9a9f-6d0b67090fb8] Running
	I1129 09:02:14.288553  493486 system_pods.go:89] "kube-apiserver-old-k8s-version-295154" [e4ca0771-197f-4d77-97f0-7a7778e227de] Running
	I1129 09:02:14.288563  493486 system_pods.go:89] "kube-controller-manager-old-k8s-version-295154" [6825ac68-da0d-474d-ac97-53398adffd73] Running
	I1129 09:02:14.288568  493486 system_pods.go:89] "kube-proxy-4rfb4" [05ef67c3-0d6e-453d-a0e5-81c649c3e033] Running
	I1129 09:02:14.288573  493486 system_pods.go:89] "kube-scheduler-old-k8s-version-295154" [97d5e6fb-5cb8-4a03-a8df-3f76df5b2671] Running
	I1129 09:02:14.288578  493486 system_pods.go:89] "storage-provisioner" [359871fd-a77c-430a-87c1-b313992718e2] Running
	I1129 09:02:14.288588  493486 system_pods.go:126] duration metric: took 1.028603527s to wait for k8s-apps to be running ...
	I1129 09:02:14.288601  493486 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:02:14.288662  493486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:02:14.302535  493486 system_svc.go:56] duration metric: took 13.922382ms WaitForService to wait for kubelet
	I1129 09:02:14.302570  493486 kubeadm.go:587] duration metric: took 15.996603485s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:02:14.302594  493486 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:02:14.305508  493486 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1129 09:02:14.305535  493486 node_conditions.go:123] node cpu capacity is 8
	I1129 09:02:14.305552  493486 node_conditions.go:105] duration metric: took 2.953214ms to run NodePressure ...
	I1129 09:02:14.305564  493486 start.go:242] waiting for startup goroutines ...
	I1129 09:02:14.305570  493486 start.go:247] waiting for cluster config update ...
	I1129 09:02:14.305583  493486 start.go:256] writing updated cluster config ...
	I1129 09:02:14.305887  493486 ssh_runner.go:195] Run: rm -f paused
	I1129 09:02:14.309803  493486 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:02:14.314558  493486 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-phw28" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.319446  493486 pod_ready.go:94] pod "coredns-5dd5756b68-phw28" is "Ready"
	I1129 09:02:14.319479  493486 pod_ready.go:86] duration metric: took 4.889509ms for pod "coredns-5dd5756b68-phw28" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.322499  493486 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.326608  493486 pod_ready.go:94] pod "etcd-old-k8s-version-295154" is "Ready"
	I1129 09:02:14.326631  493486 pod_ready.go:86] duration metric: took 4.109693ms for pod "etcd-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.329352  493486 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.333844  493486 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-295154" is "Ready"
	I1129 09:02:14.333867  493486 pod_ready.go:86] duration metric: took 4.49688ms for pod "kube-apiserver-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.336686  493486 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.714439  493486 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-295154" is "Ready"
	I1129 09:02:14.714472  493486 pod_ready.go:86] duration metric: took 377.765984ms for pod "kube-controller-manager-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:14.915822  493486 pod_ready.go:83] waiting for pod "kube-proxy-4rfb4" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:15.314552  493486 pod_ready.go:94] pod "kube-proxy-4rfb4" is "Ready"
	I1129 09:02:15.314586  493486 pod_ready.go:86] duration metric: took 398.736001ms for pod "kube-proxy-4rfb4" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:15.515989  493486 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:15.913869  493486 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-295154" is "Ready"
	I1129 09:02:15.913896  493486 pod_ready.go:86] duration metric: took 397.877691ms for pod "kube-scheduler-old-k8s-version-295154" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:15.913908  493486 pod_ready.go:40] duration metric: took 1.604073956s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:02:15.959941  493486 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1129 09:02:15.961883  493486 out.go:203] 
	W1129 09:02:15.963183  493486 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1129 09:02:15.964449  493486 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1129 09:02:15.966035  493486 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-295154" cluster and "default" namespace by default
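
	[editor's note] The warning just above compares the host kubectl (1.34.2) against the cluster version (1.28.0) and reports a minor skew of 6. A minimal sketch of that arithmetic; the parsing is illustrative and only handles "major.minor[.patch]" strings:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorSkew returns |clientMinor - serverMinor| for "major.minor[.patch]" version strings.
	func minorSkew(client, server string) (int, error) {
		minor := func(v string) (int, error) {
			parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
			if len(parts) < 2 {
				return 0, fmt.Errorf("unparseable version %q", v)
			}
			return strconv.Atoi(parts[1])
		}
		c, err := minor(client)
		if err != nil {
			return 0, err
		}
		s, err := minor(server)
		if err != nil {
			return 0, err
		}
		if c > s {
			return c - s, nil
		}
		return s - c, nil
	}

	func main() {
		skew, _ := minorSkew("1.34.2", "1.28.0")
		fmt.Println("minor skew:", skew) // prints 6, matching the warning in the log
	}
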
	W1129 09:02:15.330798  494126 node_ready.go:57] node "no-preload-924441" has "Ready":"False" status (will retry)
	W1129 09:02:17.331851  494126 node_ready.go:57] node "no-preload-924441" has "Ready":"False" status (will retry)
	I1129 09:02:14.509454  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:02:14.509484  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:02:14.571273  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:02:14.571298  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:14.571312  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:14.605440  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:02:14.605476  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:14.642678  460401 logs.go:123] Gathering logs for kube-controller-manager [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6] ...
	I1129 09:02:14.642712  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:14.671483  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:02:14.671514  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:14.701619  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:02:14.701647  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:02:17.246912  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:02:17.247337  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:02:17.247422  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:02:17.247479  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:02:17.277610  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:17.277632  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:17.277637  460401 cri.go:89] found id: ""
	I1129 09:02:17.277647  460401 logs.go:282] 2 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:02:17.277711  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.283531  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.288554  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:02:17.288644  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:02:17.316819  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:17.316847  460401 cri.go:89] found id: ""
	I1129 09:02:17.316857  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:02:17.316921  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.322640  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:02:17.322770  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:02:17.353531  460401 cri.go:89] found id: ""
	I1129 09:02:17.353563  460401 logs.go:282] 0 containers: []
	W1129 09:02:17.353575  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:02:17.353585  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:02:17.353651  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:02:17.384830  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:17.384854  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:17.384858  460401 cri.go:89] found id: ""
	I1129 09:02:17.384867  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:02:17.384932  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.390132  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.395096  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:02:17.395177  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:02:17.425643  460401 cri.go:89] found id: ""
	I1129 09:02:17.425681  460401 logs.go:282] 0 containers: []
	W1129 09:02:17.425692  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:02:17.425704  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:02:17.425788  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:02:17.456077  460401 cri.go:89] found id: "c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:17.456105  460401 cri.go:89] found id: "2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:17.456113  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:17.456136  460401 cri.go:89] found id: ""
	I1129 09:02:17.456148  460401 logs.go:282] 3 containers: [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:02:17.456213  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.461610  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.466727  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:17.471762  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:02:17.471849  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:02:17.501750  460401 cri.go:89] found id: ""
	I1129 09:02:17.501782  460401 logs.go:282] 0 containers: []
	W1129 09:02:17.501793  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:02:17.501801  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:02:17.501868  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:02:17.531903  460401 cri.go:89] found id: ""
	I1129 09:02:17.531932  460401 logs.go:282] 0 containers: []
	W1129 09:02:17.531942  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:02:17.531956  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:02:17.531972  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:02:17.630517  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:17.630566  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:17.667169  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:02:17.667205  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:17.707311  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:02:17.707360  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:17.746580  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:17.746621  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:17.799162  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:02:17.799207  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:17.839313  460401 logs.go:123] Gathering logs for kube-controller-manager [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6] ...
	I1129 09:02:17.839355  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:17.872700  460401 logs.go:123] Gathering logs for kube-controller-manager [2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a] ...
	I1129 09:02:17.872742  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2f891797b465edfa86c2546293600d895d1c61c2f2a00d85b8482ff1b20cb71a"
	I1129 09:02:17.904806  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:02:17.904838  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:02:17.920866  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:02:17.920904  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:02:17.983002  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:02:17.983027  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:02:17.983040  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:18.019203  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:02:18.019241  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:02:18.070893  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:02:18.070936  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1129 09:02:19.830479  494126 node_ready.go:57] node "no-preload-924441" has "Ready":"False" status (will retry)
	I1129 09:02:20.833313  494126 node_ready.go:49] node "no-preload-924441" is "Ready"
	I1129 09:02:20.833355  494126 node_ready.go:38] duration metric: took 14.505431475s for node "no-preload-924441" to be "Ready" ...
	I1129 09:02:20.833377  494126 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:02:20.833445  494126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:02:20.850134  494126 api_server.go:72] duration metric: took 14.795523765s to wait for apiserver process to appear ...
	I1129 09:02:20.850165  494126 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:02:20.850190  494126 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1129 09:02:20.856514  494126 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1129 09:02:20.857900  494126 api_server.go:141] control plane version: v1.34.1
	I1129 09:02:20.857933  494126 api_server.go:131] duration metric: took 7.759312ms to wait for apiserver health ...
	I1129 09:02:20.857945  494126 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:02:20.861811  494126 system_pods.go:59] 8 kube-system pods found
	I1129 09:02:20.861851  494126 system_pods.go:61] "coredns-66bc5c9577-nsh8w" [bf2a8ab9-aaca-4ee6-a390-a02099f693d9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:20.861863  494126 system_pods.go:61] "etcd-no-preload-924441" [e3cda1b0-1ca8-4ded-a506-f728fc050781] Running
	I1129 09:02:20.861871  494126 system_pods.go:61] "kindnet-nscfk" [052c2152-0369-4121-b2fe-25b79a00145a] Running
	I1129 09:02:20.861877  494126 system_pods.go:61] "kube-apiserver-no-preload-924441" [08168b39-5d95-4d6b-ac99-3c6ee50a2530] Running
	I1129 09:02:20.861892  494126 system_pods.go:61] "kube-controller-manager-no-preload-924441" [9e84b562-ff11-40c1-a7ab-3682dbbae4be] Running
	I1129 09:02:20.861897  494126 system_pods.go:61] "kube-proxy-96fcg" [c9fd8592-2ec4-4da3-a800-b136c118d379] Running
	I1129 09:02:20.861902  494126 system_pods.go:61] "kube-scheduler-no-preload-924441" [91fa5a87-81d7-4b1c-8334-9c5c4fcf8997] Running
	I1129 09:02:20.861912  494126 system_pods.go:61] "storage-provisioner" [88b64cf8-3233-47bb-be31-6f367a8a1433] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:20.861920  494126 system_pods.go:74] duration metric: took 3.967151ms to wait for pod list to return data ...
	I1129 09:02:20.861931  494126 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:02:20.864542  494126 default_sa.go:45] found service account: "default"
	I1129 09:02:20.864569  494126 default_sa.go:55] duration metric: took 2.631761ms for default service account to be created ...
	I1129 09:02:20.864581  494126 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:02:20.867876  494126 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:20.867913  494126 system_pods.go:89] "coredns-66bc5c9577-nsh8w" [bf2a8ab9-aaca-4ee6-a390-a02099f693d9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:20.867924  494126 system_pods.go:89] "etcd-no-preload-924441" [e3cda1b0-1ca8-4ded-a506-f728fc050781] Running
	I1129 09:02:20.867932  494126 system_pods.go:89] "kindnet-nscfk" [052c2152-0369-4121-b2fe-25b79a00145a] Running
	I1129 09:02:20.867938  494126 system_pods.go:89] "kube-apiserver-no-preload-924441" [08168b39-5d95-4d6b-ac99-3c6ee50a2530] Running
	I1129 09:02:20.867999  494126 system_pods.go:89] "kube-controller-manager-no-preload-924441" [9e84b562-ff11-40c1-a7ab-3682dbbae4be] Running
	I1129 09:02:20.868005  494126 system_pods.go:89] "kube-proxy-96fcg" [c9fd8592-2ec4-4da3-a800-b136c118d379] Running
	I1129 09:02:20.868011  494126 system_pods.go:89] "kube-scheduler-no-preload-924441" [91fa5a87-81d7-4b1c-8334-9c5c4fcf8997] Running
	I1129 09:02:20.868027  494126 system_pods.go:89] "storage-provisioner" [88b64cf8-3233-47bb-be31-6f367a8a1433] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:20.868077  494126 retry.go:31] will retry after 292.54579ms: missing components: kube-dns
	I1129 09:02:21.165357  494126 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:21.165399  494126 system_pods.go:89] "coredns-66bc5c9577-nsh8w" [bf2a8ab9-aaca-4ee6-a390-a02099f693d9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:21.165408  494126 system_pods.go:89] "etcd-no-preload-924441" [e3cda1b0-1ca8-4ded-a506-f728fc050781] Running
	I1129 09:02:21.165416  494126 system_pods.go:89] "kindnet-nscfk" [052c2152-0369-4121-b2fe-25b79a00145a] Running
	I1129 09:02:21.165422  494126 system_pods.go:89] "kube-apiserver-no-preload-924441" [08168b39-5d95-4d6b-ac99-3c6ee50a2530] Running
	I1129 09:02:21.165428  494126 system_pods.go:89] "kube-controller-manager-no-preload-924441" [9e84b562-ff11-40c1-a7ab-3682dbbae4be] Running
	I1129 09:02:21.165434  494126 system_pods.go:89] "kube-proxy-96fcg" [c9fd8592-2ec4-4da3-a800-b136c118d379] Running
	I1129 09:02:21.165439  494126 system_pods.go:89] "kube-scheduler-no-preload-924441" [91fa5a87-81d7-4b1c-8334-9c5c4fcf8997] Running
	I1129 09:02:21.165449  494126 system_pods.go:89] "storage-provisioner" [88b64cf8-3233-47bb-be31-6f367a8a1433] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:21.165470  494126 retry.go:31] will retry after 336.406198ms: missing components: kube-dns
	I1129 09:02:21.505471  494126 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:21.505510  494126 system_pods.go:89] "coredns-66bc5c9577-nsh8w" [bf2a8ab9-aaca-4ee6-a390-a02099f693d9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:02:21.505516  494126 system_pods.go:89] "etcd-no-preload-924441" [e3cda1b0-1ca8-4ded-a506-f728fc050781] Running
	I1129 09:02:21.505524  494126 system_pods.go:89] "kindnet-nscfk" [052c2152-0369-4121-b2fe-25b79a00145a] Running
	I1129 09:02:21.505528  494126 system_pods.go:89] "kube-apiserver-no-preload-924441" [08168b39-5d95-4d6b-ac99-3c6ee50a2530] Running
	I1129 09:02:21.505531  494126 system_pods.go:89] "kube-controller-manager-no-preload-924441" [9e84b562-ff11-40c1-a7ab-3682dbbae4be] Running
	I1129 09:02:21.505534  494126 system_pods.go:89] "kube-proxy-96fcg" [c9fd8592-2ec4-4da3-a800-b136c118d379] Running
	I1129 09:02:21.505538  494126 system_pods.go:89] "kube-scheduler-no-preload-924441" [91fa5a87-81d7-4b1c-8334-9c5c4fcf8997] Running
	I1129 09:02:21.505542  494126 system_pods.go:89] "storage-provisioner" [88b64cf8-3233-47bb-be31-6f367a8a1433] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:02:21.505560  494126 retry.go:31] will retry after 447.535618ms: missing components: kube-dns
	I1129 09:02:21.957409  494126 system_pods.go:86] 8 kube-system pods found
	I1129 09:02:21.957439  494126 system_pods.go:89] "coredns-66bc5c9577-nsh8w" [bf2a8ab9-aaca-4ee6-a390-a02099f693d9] Running
	I1129 09:02:21.957444  494126 system_pods.go:89] "etcd-no-preload-924441" [e3cda1b0-1ca8-4ded-a506-f728fc050781] Running
	I1129 09:02:21.957448  494126 system_pods.go:89] "kindnet-nscfk" [052c2152-0369-4121-b2fe-25b79a00145a] Running
	I1129 09:02:21.957451  494126 system_pods.go:89] "kube-apiserver-no-preload-924441" [08168b39-5d95-4d6b-ac99-3c6ee50a2530] Running
	I1129 09:02:21.957456  494126 system_pods.go:89] "kube-controller-manager-no-preload-924441" [9e84b562-ff11-40c1-a7ab-3682dbbae4be] Running
	I1129 09:02:21.957459  494126 system_pods.go:89] "kube-proxy-96fcg" [c9fd8592-2ec4-4da3-a800-b136c118d379] Running
	I1129 09:02:21.957464  494126 system_pods.go:89] "kube-scheduler-no-preload-924441" [91fa5a87-81d7-4b1c-8334-9c5c4fcf8997] Running
	I1129 09:02:21.957467  494126 system_pods.go:89] "storage-provisioner" [88b64cf8-3233-47bb-be31-6f367a8a1433] Running
	I1129 09:02:21.957476  494126 system_pods.go:126] duration metric: took 1.092887723s to wait for k8s-apps to be running ...
	I1129 09:02:21.957498  494126 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:02:21.957549  494126 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:02:21.971582  494126 system_svc.go:56] duration metric: took 14.071974ms WaitForService to wait for kubelet
	I1129 09:02:21.971613  494126 kubeadm.go:587] duration metric: took 15.917009838s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:02:21.971632  494126 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:02:21.974426  494126 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1129 09:02:21.974453  494126 node_conditions.go:123] node cpu capacity is 8
	I1129 09:02:21.974471  494126 node_conditions.go:105] duration metric: took 2.83418ms to run NodePressure ...
	I1129 09:02:21.974485  494126 start.go:242] waiting for startup goroutines ...
	I1129 09:02:21.974492  494126 start.go:247] waiting for cluster config update ...
	I1129 09:02:21.974502  494126 start.go:256] writing updated cluster config ...
	I1129 09:02:21.974780  494126 ssh_runner.go:195] Run: rm -f paused
	I1129 09:02:21.978967  494126 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:02:21.982434  494126 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nsh8w" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:21.986370  494126 pod_ready.go:94] pod "coredns-66bc5c9577-nsh8w" is "Ready"
	I1129 09:02:21.986395  494126 pod_ready.go:86] duration metric: took 3.939701ms for pod "coredns-66bc5c9577-nsh8w" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:21.988365  494126 pod_ready.go:83] waiting for pod "etcd-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:21.991850  494126 pod_ready.go:94] pod "etcd-no-preload-924441" is "Ready"
	I1129 09:02:21.991874  494126 pod_ready.go:86] duration metric: took 3.486388ms for pod "etcd-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:21.993587  494126 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:21.997072  494126 pod_ready.go:94] pod "kube-apiserver-no-preload-924441" is "Ready"
	I1129 09:02:21.997092  494126 pod_ready.go:86] duration metric: took 3.484304ms for pod "kube-apiserver-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:21.998698  494126 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:22.382918  494126 pod_ready.go:94] pod "kube-controller-manager-no-preload-924441" is "Ready"
	I1129 09:02:22.382948  494126 pod_ready.go:86] duration metric: took 384.232783ms for pod "kube-controller-manager-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:22.583125  494126 pod_ready.go:83] waiting for pod "kube-proxy-96fcg" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:22.982608  494126 pod_ready.go:94] pod "kube-proxy-96fcg" is "Ready"
	I1129 09:02:22.982639  494126 pod_ready.go:86] duration metric: took 399.48383ms for pod "kube-proxy-96fcg" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:23.184031  494126 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:23.583027  494126 pod_ready.go:94] pod "kube-scheduler-no-preload-924441" is "Ready"
	I1129 09:02:23.583058  494126 pod_ready.go:86] duration metric: took 399.00134ms for pod "kube-scheduler-no-preload-924441" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:02:23.583071  494126 pod_ready.go:40] duration metric: took 1.604064431s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:02:23.632822  494126 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:02:23.634677  494126 out.go:179] * Done! kubectl is now configured to use "no-preload-924441" cluster and "default" namespace by default
	I1129 09:02:20.607959  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:02:20.608406  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:02:20.608469  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:02:20.608531  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:02:20.639116  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:20.639148  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:20.639155  460401 cri.go:89] found id: ""
	I1129 09:02:20.639168  460401 logs.go:282] 2 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:02:20.639240  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:20.644749  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:20.649347  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:02:20.649411  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:02:20.677383  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:20.677404  460401 cri.go:89] found id: ""
	I1129 09:02:20.677413  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:02:20.677466  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:20.682625  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:02:20.682708  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:02:20.711021  460401 cri.go:89] found id: ""
	I1129 09:02:20.711050  460401 logs.go:282] 0 containers: []
	W1129 09:02:20.711060  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:02:20.711070  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:02:20.711138  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:02:20.745598  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:20.745626  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:20.745632  460401 cri.go:89] found id: ""
	I1129 09:02:20.745643  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:02:20.745716  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:20.751838  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:20.757804  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:02:20.757881  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:02:20.793640  460401 cri.go:89] found id: ""
	I1129 09:02:20.793671  460401 logs.go:282] 0 containers: []
	W1129 09:02:20.793683  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:02:20.793691  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:02:20.793792  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:02:20.830071  460401 cri.go:89] found id: "c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:20.830099  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:20.830104  460401 cri.go:89] found id: ""
	I1129 09:02:20.830114  460401 logs.go:282] 2 containers: [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:02:20.830179  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:20.837576  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:20.843146  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:02:20.843225  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:02:20.883480  460401 cri.go:89] found id: ""
	I1129 09:02:20.883525  460401 logs.go:282] 0 containers: []
	W1129 09:02:20.883536  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:02:20.883543  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:02:20.883598  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:02:20.923499  460401 cri.go:89] found id: ""
	I1129 09:02:20.923532  460401 logs.go:282] 0 containers: []
	W1129 09:02:20.923543  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:02:20.923557  460401 logs.go:123] Gathering logs for kube-controller-manager [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6] ...
	I1129 09:02:20.923574  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:20.961675  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:02:20.961713  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:20.996489  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:02:20.996524  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:02:21.046535  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:02:21.046596  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:02:21.131239  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:02:21.131286  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:02:21.192537  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:02:21.192557  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:21.192573  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:21.227894  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:02:21.227932  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:21.262592  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:02:21.262632  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:02:21.298034  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:02:21.298076  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:02:21.313593  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:02:21.313626  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:21.355840  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:21.355878  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:21.409528  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:02:21.409570  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:23.946261  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:02:23.946794  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:02:23.946872  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:02:23.946940  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:02:23.978496  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:23.978521  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:23.978525  460401 cri.go:89] found id: ""
	I1129 09:02:23.978533  460401 logs.go:282] 2 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:02:23.978585  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:23.983820  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:23.988502  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:02:23.988563  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:02:24.017479  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:24.017505  460401 cri.go:89] found id: ""
	I1129 09:02:24.017516  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:02:24.017581  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:24.022978  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:02:24.023049  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:02:24.054017  460401 cri.go:89] found id: ""
	I1129 09:02:24.054042  460401 logs.go:282] 0 containers: []
	W1129 09:02:24.054049  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:02:24.054055  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:02:24.054104  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:02:24.083682  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:24.083704  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:24.083710  460401 cri.go:89] found id: ""
	I1129 09:02:24.083720  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:02:24.083797  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:24.089191  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:24.094144  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:02:24.094223  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:02:24.123931  460401 cri.go:89] found id: ""
	I1129 09:02:24.123956  460401 logs.go:282] 0 containers: []
	W1129 09:02:24.123964  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:02:24.123972  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:02:24.124032  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:02:24.158678  460401 cri.go:89] found id: "c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:24.158704  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:24.158710  460401 cri.go:89] found id: ""
	I1129 09:02:24.158721  460401 logs.go:282] 2 containers: [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:02:24.158824  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:24.164380  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:24.170117  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:02:24.170196  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:02:24.202016  460401 cri.go:89] found id: ""
	I1129 09:02:24.202057  460401 logs.go:282] 0 containers: []
	W1129 09:02:24.202066  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:02:24.202072  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:02:24.202123  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:02:24.235359  460401 cri.go:89] found id: ""
	I1129 09:02:24.235388  460401 logs.go:282] 0 containers: []
	W1129 09:02:24.235399  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:02:24.235412  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:02:24.235427  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:02:24.327121  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:24.327167  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:24.380608  460401 logs.go:123] Gathering logs for kube-controller-manager [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6] ...
	I1129 09:02:24.380651  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:24.411895  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:02:24.411923  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:24.450543  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:02:24.450575  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:02:24.500105  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:02:24.500146  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:02:24.534213  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:02:24.534244  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:02:24.548977  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:02:24.549027  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:02:24.610946  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:02:24.610979  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:24.610995  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:24.646378  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:02:24.646412  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:24.681683  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:02:24.681724  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:24.720949  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:02:24.720984  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:27.257815  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:02:27.258260  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:02:27.258319  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:02:27.258379  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:02:27.293527  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:27.293551  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:27.293555  460401 cri.go:89] found id: ""
	I1129 09:02:27.293565  460401 logs.go:282] 2 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:02:27.293624  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:27.299010  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:27.303563  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:02:27.303630  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:02:27.333820  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:27.333843  460401 cri.go:89] found id: ""
	I1129 09:02:27.333854  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:02:27.333911  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:27.339591  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:02:27.339665  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:02:27.371040  460401 cri.go:89] found id: ""
	I1129 09:02:27.371072  460401 logs.go:282] 0 containers: []
	W1129 09:02:27.371092  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:02:27.371100  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:02:27.371156  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:02:27.404567  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:27.404591  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:27.404598  460401 cri.go:89] found id: ""
	I1129 09:02:27.404609  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:02:27.404679  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:27.411018  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:27.416301  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:02:27.416384  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:02:27.448123  460401 cri.go:89] found id: ""
	I1129 09:02:27.448154  460401 logs.go:282] 0 containers: []
	W1129 09:02:27.448166  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:02:27.448174  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:02:27.448239  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:02:27.479204  460401 cri.go:89] found id: "c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:27.479228  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:27.479233  460401 cri.go:89] found id: ""
	I1129 09:02:27.479243  460401 logs.go:282] 2 containers: [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:02:27.479299  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:27.485023  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:27.490034  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:02:27.490099  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:02:27.522830  460401 cri.go:89] found id: ""
	I1129 09:02:27.522862  460401 logs.go:282] 0 containers: []
	W1129 09:02:27.522872  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:02:27.522880  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:02:27.522940  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:02:27.556537  460401 cri.go:89] found id: ""
	I1129 09:02:27.556565  460401 logs.go:282] 0 containers: []
	W1129 09:02:27.556576  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:02:27.556589  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:02:27.556606  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:02:27.573324  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:02:27.573353  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:02:27.639338  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:02:27.639361  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:27.639380  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:27.675020  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:02:27.675050  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:27.723155  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:02:27.723191  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:27.762423  460401 logs.go:123] Gathering logs for kube-controller-manager [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6] ...
	I1129 09:02:27.762453  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:27.793598  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:02:27.793627  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:02:27.858089  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:02:27.858122  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:02:27.895696  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:02:27.895746  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:02:28.002060  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:02:28.002103  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:28.050250  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:28.050287  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:28.108778  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:02:28.108830  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:30.644794  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:02:30.645215  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:02:30.645272  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:02:30.645330  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:02:30.681066  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:30.681092  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:30.681098  460401 cri.go:89] found id: ""
	I1129 09:02:30.681107  460401 logs.go:282] 2 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:02:30.681171  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:30.689705  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:30.697481  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:02:30.697564  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:02:30.727558  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:30.727578  460401 cri.go:89] found id: ""
	I1129 09:02:30.727586  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:02:30.727641  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:30.732816  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:02:30.732890  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:02:30.764398  460401 cri.go:89] found id: ""
	I1129 09:02:30.764421  460401 logs.go:282] 0 containers: []
	W1129 09:02:30.764429  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:02:30.764434  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:02:30.764492  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:02:30.797992  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:30.798020  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:30.798026  460401 cri.go:89] found id: ""
	I1129 09:02:30.798036  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:02:30.798103  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:30.804638  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:30.810851  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:02:30.810945  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:02:30.846963  460401 cri.go:89] found id: ""
	I1129 09:02:30.846994  460401 logs.go:282] 0 containers: []
	W1129 09:02:30.847006  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:02:30.847015  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:02:30.847076  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:02:30.879924  460401 cri.go:89] found id: "c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:30.879951  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:30.879958  460401 cri.go:89] found id: ""
	I1129 09:02:30.879969  460401 logs.go:282] 2 containers: [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:02:30.880032  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:30.885602  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:30.890269  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:02:30.890333  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:02:30.919837  460401 cri.go:89] found id: ""
	I1129 09:02:30.919863  460401 logs.go:282] 0 containers: []
	W1129 09:02:30.919870  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:02:30.919877  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:02:30.919929  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:02:30.955025  460401 cri.go:89] found id: ""
	I1129 09:02:30.955051  460401 logs.go:282] 0 containers: []
	W1129 09:02:30.955060  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:02:30.955070  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:02:30.955081  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:02:31.045437  460401 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:02:31.045481  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:02:31.105090  460401 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:02:31.105116  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:31.105132  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:31.140128  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:02:31.140159  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:31.174858  460401 logs.go:123] Gathering logs for etcd [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625] ...
	I1129 09:02:31.174887  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:31.214907  460401 logs.go:123] Gathering logs for kube-scheduler [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21] ...
	I1129 09:02:31.214942  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:31.266721  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:02:31.266780  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:31.301406  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:02:31.301436  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:02:31.334075  460401 logs.go:123] Gathering logs for dmesg ...
	I1129 09:02:31.334105  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:02:31.349877  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:02:31.349908  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:31.386831  460401 logs.go:123] Gathering logs for kube-controller-manager [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6] ...
	I1129 09:02:31.386860  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:31.417056  460401 logs.go:123] Gathering logs for containerd ...
	I1129 09:02:31.417088  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1129 09:02:33.963799  460401 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1129 09:02:33.964273  460401 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1129 09:02:33.964327  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:02:33.964391  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:02:33.994099  460401 cri.go:89] found id: "7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:33.994126  460401 cri.go:89] found id: "1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:33.994132  460401 cri.go:89] found id: ""
	I1129 09:02:33.994142  460401 logs.go:282] 2 containers: [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101]
	I1129 09:02:33.994206  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:34.000403  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:34.005225  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1129 09:02:34.005298  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:02:34.035853  460401 cri.go:89] found id: "f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625"
	I1129 09:02:34.035882  460401 cri.go:89] found id: ""
	I1129 09:02:34.035893  460401 logs.go:282] 1 containers: [f8848e5e1655cac3456277b8b0b7d18c4bad91fab69e433e92c22e3c33ff4625]
	I1129 09:02:34.035957  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:34.041421  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1129 09:02:34.041518  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:02:34.071156  460401 cri.go:89] found id: ""
	I1129 09:02:34.071182  460401 logs.go:282] 0 containers: []
	W1129 09:02:34.071190  460401 logs.go:284] No container was found matching "coredns"
	I1129 09:02:34.071196  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:02:34.071249  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:02:34.102243  460401 cri.go:89] found id: "092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21"
	I1129 09:02:34.102264  460401 cri.go:89] found id: "1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:34.102272  460401 cri.go:89] found id: ""
	I1129 09:02:34.102282  460401 logs.go:282] 2 containers: [092aaf3b340b8d1d8f232e35f0798e461727e7b5609738356ddf194405de6b21 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea]
	I1129 09:02:34.102349  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:34.107266  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:34.111796  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:02:34.111874  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:02:34.140663  460401 cri.go:89] found id: ""
	I1129 09:02:34.140696  460401 logs.go:282] 0 containers: []
	W1129 09:02:34.140706  460401 logs.go:284] No container was found matching "kube-proxy"
	I1129 09:02:34.140714  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:02:34.140789  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:02:34.170642  460401 cri.go:89] found id: "c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:34.170664  460401 cri.go:89] found id: "976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:34.170669  460401 cri.go:89] found id: ""
	I1129 09:02:34.170679  460401 logs.go:282] 2 containers: [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d]
	I1129 09:02:34.170751  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:34.175884  460401 ssh_runner.go:195] Run: which crictl
	I1129 09:02:34.180523  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1129 09:02:34.180602  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:02:34.217118  460401 cri.go:89] found id: ""
	I1129 09:02:34.217147  460401 logs.go:282] 0 containers: []
	W1129 09:02:34.217164  460401 logs.go:284] No container was found matching "kindnet"
	I1129 09:02:34.217176  460401 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:02:34.217244  460401 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:02:34.249681  460401 cri.go:89] found id: ""
	I1129 09:02:34.249711  460401 logs.go:282] 0 containers: []
	W1129 09:02:34.249723  460401 logs.go:284] No container was found matching "storage-provisioner"
	I1129 09:02:34.249750  460401 logs.go:123] Gathering logs for kube-apiserver [7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1] ...
	I1129 09:02:34.249769  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a1e63397d000aac401e89cd0868663c584fe870c2ff14eb45f8a4367d4486b1"
	I1129 09:02:34.286128  460401 logs.go:123] Gathering logs for kube-apiserver [1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101] ...
	I1129 09:02:34.286162  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1fd4280706d61cbcf9886889f6bf4ab1611870c991359ef9ab1d4e394ae55101"
	I1129 09:02:34.323913  460401 logs.go:123] Gathering logs for kube-scheduler [1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea] ...
	I1129 09:02:34.323942  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca70c9760bb31036b5cb191fa8757681cc4ff82a6ef53e7d820ae39d6a325ea"
	I1129 09:02:34.362693  460401 logs.go:123] Gathering logs for kube-controller-manager [c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6] ...
	I1129 09:02:34.362725  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c02fde109f33b7f8e531c53bdd46d0f4d0aa69316a6ccb36f76e8398cb60afd6"
	I1129 09:02:34.402875  460401 logs.go:123] Gathering logs for kube-controller-manager [976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d] ...
	I1129 09:02:34.402903  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 976f1364ed9a03f83e63e1f687ffb6855e93bbdb516287b2e6c38f7984f7f39d"
	I1129 09:02:34.445521  460401 logs.go:123] Gathering logs for container status ...
	I1129 09:02:34.445555  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:02:34.481702  460401 logs.go:123] Gathering logs for kubelet ...
	I1129 09:02:34.481769  460401 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	bb9fb2e713bd5       56cc512116c8f       10 seconds ago      Running             busybox                   0                   864d85bb8c066       busybox                                     default
	5edc79817e8ae       52546a367cc9e       16 seconds ago      Running             coredns                   0                   b4bf38030bbc6       coredns-66bc5c9577-nsh8w                    kube-system
	07f73647c6425       6e38f40d628db       16 seconds ago      Running             storage-provisioner       0                   13cafb453dbf6       storage-provisioner                         kube-system
	b3f766ac9f956       409467f978b4a       27 seconds ago      Running             kindnet-cni               0                   13d395db41ff5       kindnet-nscfk                               kube-system
	ff4ea2e8a24f9       fc25172553d79       30 seconds ago      Running             kube-proxy                0                   2dcc97f747328       kube-proxy-96fcg                            kube-system
	f8f46516dbe28       c80c8dbafe7dd       40 seconds ago      Running             kube-controller-manager   0                   d29a4696be107       kube-controller-manager-no-preload-924441   kube-system
	383685f5bf643       c3994bc696102       40 seconds ago      Running             kube-apiserver            0                   ec2efda1f0917       kube-apiserver-no-preload-924441            kube-system
	ab8fc300ad1ef       5f1f5298c888d       40 seconds ago      Running             etcd                      0                   e5b8283f11801       etcd-no-preload-924441                      kube-system
	ee9669cc467e6       7dd6aaa1717ab       40 seconds ago      Running             kube-scheduler            0                   78738700c9426       kube-scheduler-no-preload-924441            kube-system
	
	
	==> containerd <==
	Nov 29 09:02:20 no-preload-924441 containerd[660]: time="2025-11-29T09:02:20.839889807Z" level=info msg="Container 5edc79817e8ae8a5c88ab5c346145ff5aedbeedf18d092fb82c27a1bb984a93c: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:02:20 no-preload-924441 containerd[660]: time="2025-11-29T09:02:20.846487617Z" level=info msg="CreateContainer within sandbox \"13cafb453dbf625e29c8df581ed06b593e1a0c42d541d44342df98eeeff068f9\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"07f73647c64253486a8c6bcde1efc5cf43486a9cb6d0209e28918468208ad47c\""
	Nov 29 09:02:20 no-preload-924441 containerd[660]: time="2025-11-29T09:02:20.847189321Z" level=info msg="StartContainer for \"07f73647c64253486a8c6bcde1efc5cf43486a9cb6d0209e28918468208ad47c\""
	Nov 29 09:02:20 no-preload-924441 containerd[660]: time="2025-11-29T09:02:20.848794818Z" level=info msg="connecting to shim 07f73647c64253486a8c6bcde1efc5cf43486a9cb6d0209e28918468208ad47c" address="unix:///run/containerd/s/eddfe8d240380d848bdacc10ce1bae9eedf4156bdf79362fe1df71f1b2f642b1" protocol=ttrpc version=3
	Nov 29 09:02:20 no-preload-924441 containerd[660]: time="2025-11-29T09:02:20.850361090Z" level=info msg="CreateContainer within sandbox \"b4bf38030bbc62b2a8208ad75f3e67bc668615f9e522d03471806f535c7bb145\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5edc79817e8ae8a5c88ab5c346145ff5aedbeedf18d092fb82c27a1bb984a93c\""
	Nov 29 09:02:20 no-preload-924441 containerd[660]: time="2025-11-29T09:02:20.851438042Z" level=info msg="StartContainer for \"5edc79817e8ae8a5c88ab5c346145ff5aedbeedf18d092fb82c27a1bb984a93c\""
	Nov 29 09:02:20 no-preload-924441 containerd[660]: time="2025-11-29T09:02:20.855688040Z" level=info msg="connecting to shim 5edc79817e8ae8a5c88ab5c346145ff5aedbeedf18d092fb82c27a1bb984a93c" address="unix:///run/containerd/s/5ec5d1cddc9bd9035fa8847c5ff116cb50e88f64af67fe9931bde5d7bff42b20" protocol=ttrpc version=3
	Nov 29 09:02:20 no-preload-924441 containerd[660]: time="2025-11-29T09:02:20.914182629Z" level=info msg="StartContainer for \"07f73647c64253486a8c6bcde1efc5cf43486a9cb6d0209e28918468208ad47c\" returns successfully"
	Nov 29 09:02:20 no-preload-924441 containerd[660]: time="2025-11-29T09:02:20.918459134Z" level=info msg="StartContainer for \"5edc79817e8ae8a5c88ab5c346145ff5aedbeedf18d092fb82c27a1bb984a93c\" returns successfully"
	Nov 29 09:02:24 no-preload-924441 containerd[660]: time="2025-11-29T09:02:24.108622763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:26d445de-fc0e-4bc8-adac-935cd86ee75c,Namespace:default,Attempt:0,}"
	Nov 29 09:02:24 no-preload-924441 containerd[660]: time="2025-11-29T09:02:24.154864619Z" level=info msg="connecting to shim 864d85bb8c06624ebece8af29e58fcc4bb5ace7f92a5183c4a475044dd50d812" address="unix:///run/containerd/s/2e05430291980b4d6bf0132c253a183cf23ed974be9153d3634e00731e9afe21" namespace=k8s.io protocol=ttrpc version=3
	Nov 29 09:02:24 no-preload-924441 containerd[660]: time="2025-11-29T09:02:24.229979898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:26d445de-fc0e-4bc8-adac-935cd86ee75c,Namespace:default,Attempt:0,} returns sandbox id \"864d85bb8c06624ebece8af29e58fcc4bb5ace7f92a5183c4a475044dd50d812\""
	Nov 29 09:02:24 no-preload-924441 containerd[660]: time="2025-11-29T09:02:24.232344242Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 29 09:02:26 no-preload-924441 containerd[660]: time="2025-11-29T09:02:26.649879117Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:02:26 no-preload-924441 containerd[660]: time="2025-11-29T09:02:26.650753960Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396645"
	Nov 29 09:02:26 no-preload-924441 containerd[660]: time="2025-11-29T09:02:26.651900992Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:02:26 no-preload-924441 containerd[660]: time="2025-11-29T09:02:26.653656808Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:02:26 no-preload-924441 containerd[660]: time="2025-11-29T09:02:26.654037410Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.421638016s"
	Nov 29 09:02:26 no-preload-924441 containerd[660]: time="2025-11-29T09:02:26.654078474Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 29 09:02:26 no-preload-924441 containerd[660]: time="2025-11-29T09:02:26.657925505Z" level=info msg="CreateContainer within sandbox \"864d85bb8c06624ebece8af29e58fcc4bb5ace7f92a5183c4a475044dd50d812\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 29 09:02:26 no-preload-924441 containerd[660]: time="2025-11-29T09:02:26.665693746Z" level=info msg="Container bb9fb2e713bd50cb79f5d0f55e6c71417f53e295c33a00d17b6626aa73517ffa: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:02:26 no-preload-924441 containerd[660]: time="2025-11-29T09:02:26.671190023Z" level=info msg="CreateContainer within sandbox \"864d85bb8c06624ebece8af29e58fcc4bb5ace7f92a5183c4a475044dd50d812\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"bb9fb2e713bd50cb79f5d0f55e6c71417f53e295c33a00d17b6626aa73517ffa\""
	Nov 29 09:02:26 no-preload-924441 containerd[660]: time="2025-11-29T09:02:26.671809727Z" level=info msg="StartContainer for \"bb9fb2e713bd50cb79f5d0f55e6c71417f53e295c33a00d17b6626aa73517ffa\""
	Nov 29 09:02:26 no-preload-924441 containerd[660]: time="2025-11-29T09:02:26.672789903Z" level=info msg="connecting to shim bb9fb2e713bd50cb79f5d0f55e6c71417f53e295c33a00d17b6626aa73517ffa" address="unix:///run/containerd/s/2e05430291980b4d6bf0132c253a183cf23ed974be9153d3634e00731e9afe21" protocol=ttrpc version=3
	Nov 29 09:02:26 no-preload-924441 containerd[660]: time="2025-11-29T09:02:26.725390423Z" level=info msg="StartContainer for \"bb9fb2e713bd50cb79f5d0f55e6c71417f53e295c33a00d17b6626aa73517ffa\" returns successfully"
	
	
	==> coredns [5edc79817e8ae8a5c88ab5c346145ff5aedbeedf18d092fb82c27a1bb984a93c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39893 - 39917 "HINFO IN 7141279770989079680.5485495748569769835. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.030214653s
	
	
	==> describe nodes <==
	Name:               no-preload-924441
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-924441
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=no-preload-924441
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_02_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:01:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-924441
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:02:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:02:31 +0000   Sat, 29 Nov 2025 09:01:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:02:31 +0000   Sat, 29 Nov 2025 09:01:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:02:31 +0000   Sat, 29 Nov 2025 09:01:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:02:31 +0000   Sat, 29 Nov 2025 09:02:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-924441
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                c7ceb567-1fa1-4ee0-a6f1-0da5aaa1749f
	  Boot ID:                    b81dce2f-73d5-4349-b473-aa1210058cb8
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 coredns-66bc5c9577-nsh8w                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     31s
	  kube-system                 etcd-no-preload-924441                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         37s
	  kube-system                 kindnet-nscfk                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      32s
	  kube-system                 kube-apiserver-no-preload-924441             250m (3%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-no-preload-924441    200m (2%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-96fcg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-scheduler-no-preload-924441             100m (1%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 30s   kube-proxy       
	  Normal  Starting                 37s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  37s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  37s   kubelet          Node no-preload-924441 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s   kubelet          Node no-preload-924441 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s   kubelet          Node no-preload-924441 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           33s   node-controller  Node no-preload-924441 event: Registered Node no-preload-924441 in Controller
	  Normal  NodeReady                17s   kubelet          Node no-preload-924441 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov29 07:17] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001881] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084003] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.378167] i8042: Warning: Keylock active
	[  +0.012106] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.460417] block sda: the capability attribute has been deprecated.
	[  +0.079627] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.021012] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.285522] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [ab8fc300ad1ef7c9eaf3026a19c133b72463317f50802b7b0376a78df36cd618] <==
	{"level":"warn","ts":"2025-11-29T09:01:57.295570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.304542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.312406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.322990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.331598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.340488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.349394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.365924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.376234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.384804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.399554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.405419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.415385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.423031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.430700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.439764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.449377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.457199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.464780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.479306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.487062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.511769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.518539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:01:57.575545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36582","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-29T09:01:59.237788Z","caller":"traceutil/trace.go:172","msg":"trace[1580144077] transaction","detail":"{read_only:false; response_revision:80; number_of_response:1; }","duration":"151.091667ms","start":"2025-11-29T09:01:59.086619Z","end":"2025-11-29T09:01:59.237711Z","steps":["trace[1580144077] 'process raft request'  (duration: 64.671777ms)","trace[1580144077] 'compare'  (duration: 86.18899ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:02:37 up  1:45,  0 user,  load average: 2.43, 2.77, 12.32
	Linux no-preload-924441 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b3f766ac9f956727596072f40e76311c158de9cfd27a4fee708265933fe75040] <==
	I1129 09:02:10.078647       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:02:10.078936       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1129 09:02:10.079096       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:02:10.079115       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:02:10.079137       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:02:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:02:10.281933       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:02:10.281951       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:02:10.281959       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:02:10.282224       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:02:10.591456       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:02:10.591490       1 metrics.go:72] Registering metrics
	I1129 09:02:10.591605       1 controller.go:711] "Syncing nftables rules"
	I1129 09:02:20.285846       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1129 09:02:20.285905       1 main.go:301] handling current node
	I1129 09:02:30.282268       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1129 09:02:30.282320       1 main.go:301] handling current node
	
	
	==> kube-apiserver [383685f5bf6438d0f7ebd7a2a386df6adcee57fe778b3e1c03d8bf71aeff5355] <==
	I1129 09:01:58.133562       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1129 09:01:58.133922       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1129 09:01:58.136307       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:01:58.140436       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1129 09:01:58.147718       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 09:01:58.148746       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:01:58.157520       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:01:59.027165       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1129 09:01:59.031139       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1129 09:01:59.031159       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:01:59.660873       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:01:59.695141       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:01:59.831242       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1129 09:01:59.838074       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1129 09:01:59.839237       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:01:59.842982       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:02:00.038311       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:02:00.973644       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:02:00.984905       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1129 09:02:00.992034       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1129 09:02:05.789882       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:02:05.992413       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1129 09:02:06.094591       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:02:06.101669       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1129 09:02:33.900359       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:59672: use of closed network connection
	
	
	==> kube-controller-manager [f8f46516dbe2804e7cc2ef18e7ab9f61630c8861fb3068698765425112e7b9fb] <==
	I1129 09:02:04.998079       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1129 09:02:04.998093       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1129 09:02:04.998102       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1129 09:02:05.004334       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-924441" podCIDRs=["10.244.0.0/24"]
	I1129 09:02:05.038460       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 09:02:05.038484       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 09:02:05.038496       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1129 09:02:05.038520       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1129 09:02:05.038550       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1129 09:02:05.038589       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1129 09:02:05.038646       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1129 09:02:05.038661       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1129 09:02:05.038774       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1129 09:02:05.038663       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1129 09:02:05.040329       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1129 09:02:05.040370       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1129 09:02:05.040394       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1129 09:02:05.041025       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1129 09:02:05.041537       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1129 09:02:05.042726       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:02:05.043836       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:02:05.047458       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 09:02:05.053705       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:02:05.054799       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1129 09:02:24.989479       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ff4ea2e8a24f908f96cbd9a880011ea3baa8a548bacc2844c238189376f25019] <==
	I1129 09:02:06.631078       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:02:06.700886       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:02:06.801888       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:02:06.801923       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1129 09:02:06.802035       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:02:06.825814       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:02:06.825893       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:02:06.832862       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:02:06.833334       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:02:06.833374       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:02:06.837208       1 config.go:200] "Starting service config controller"
	I1129 09:02:06.837238       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:02:06.837350       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:02:06.837548       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:02:06.837565       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:02:06.837762       1 config.go:309] "Starting node config controller"
	I1129 09:02:06.837783       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:02:06.837789       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:02:06.838082       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:02:06.937437       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 09:02:06.937883       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 09:02:06.939103       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ee9669cc467e6d964524ce24464caca9bf8524a5a97a7275b088e9fd74ac089e] <==
	E1129 09:01:58.182270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 09:01:58.182280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 09:01:58.182265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 09:01:58.182367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:01:58.182441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 09:01:58.182521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 09:01:58.182556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 09:01:58.182601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 09:01:58.182634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 09:01:58.182680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 09:01:59.028399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 09:01:59.095025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 09:01:59.159692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 09:01:59.199792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 09:01:59.235130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 09:01:59.248432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 09:01:59.301093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1129 09:01:59.303111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 09:01:59.319306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 09:01:59.335448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 09:01:59.348897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 09:01:59.348897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 09:01:59.402719       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:01:59.428255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1129 09:02:02.175664       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 09:02:01 no-preload-924441 kubelet[2148]: I1129 09:02:01.864854    2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-924441" podStartSLOduration=1.8648393859999999 podStartE2EDuration="1.864839386s" podCreationTimestamp="2025-11-29 09:02:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:02:01.864718526 +0000 UTC m=+1.148784794" watchObservedRunningTime="2025-11-29 09:02:01.864839386 +0000 UTC m=+1.148905656"
	Nov 29 09:02:01 no-preload-924441 kubelet[2148]: I1129 09:02:01.884266    2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-924441" podStartSLOduration=1.8842352629999999 podStartE2EDuration="1.884235263s" podCreationTimestamp="2025-11-29 09:02:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:02:01.875079325 +0000 UTC m=+1.159145596" watchObservedRunningTime="2025-11-29 09:02:01.884235263 +0000 UTC m=+1.168301535"
	Nov 29 09:02:01 no-preload-924441 kubelet[2148]: I1129 09:02:01.897207    2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-924441" podStartSLOduration=1.897186102 podStartE2EDuration="1.897186102s" podCreationTimestamp="2025-11-29 09:02:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:02:01.884627868 +0000 UTC m=+1.168694138" watchObservedRunningTime="2025-11-29 09:02:01.897186102 +0000 UTC m=+1.181252370"
	Nov 29 09:02:01 no-preload-924441 kubelet[2148]: I1129 09:02:01.897352    2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-924441" podStartSLOduration=1.897346712 podStartE2EDuration="1.897346712s" podCreationTimestamp="2025-11-29 09:02:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:02:01.896770879 +0000 UTC m=+1.180837150" watchObservedRunningTime="2025-11-29 09:02:01.897346712 +0000 UTC m=+1.181412983"
	Nov 29 09:02:05 no-preload-924441 kubelet[2148]: I1129 09:02:05.036551    2148 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 29 09:02:05 no-preload-924441 kubelet[2148]: I1129 09:02:05.037374    2148 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 29 09:02:06 no-preload-924441 kubelet[2148]: I1129 09:02:06.020008    2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c9fd8592-2ec4-4da3-a800-b136c118d379-kube-proxy\") pod \"kube-proxy-96fcg\" (UID: \"c9fd8592-2ec4-4da3-a800-b136c118d379\") " pod="kube-system/kube-proxy-96fcg"
	Nov 29 09:02:06 no-preload-924441 kubelet[2148]: I1129 09:02:06.020054    2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9fd8592-2ec4-4da3-a800-b136c118d379-xtables-lock\") pod \"kube-proxy-96fcg\" (UID: \"c9fd8592-2ec4-4da3-a800-b136c118d379\") " pod="kube-system/kube-proxy-96fcg"
	Nov 29 09:02:06 no-preload-924441 kubelet[2148]: I1129 09:02:06.020076    2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9fd8592-2ec4-4da3-a800-b136c118d379-lib-modules\") pod \"kube-proxy-96fcg\" (UID: \"c9fd8592-2ec4-4da3-a800-b136c118d379\") " pod="kube-system/kube-proxy-96fcg"
	Nov 29 09:02:06 no-preload-924441 kubelet[2148]: I1129 09:02:06.020096    2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhxc7\" (UniqueName: \"kubernetes.io/projected/c9fd8592-2ec4-4da3-a800-b136c118d379-kube-api-access-vhxc7\") pod \"kube-proxy-96fcg\" (UID: \"c9fd8592-2ec4-4da3-a800-b136c118d379\") " pod="kube-system/kube-proxy-96fcg"
	Nov 29 09:02:06 no-preload-924441 kubelet[2148]: I1129 09:02:06.120995    2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/052c2152-0369-4121-b2fe-25b79a00145a-xtables-lock\") pod \"kindnet-nscfk\" (UID: \"052c2152-0369-4121-b2fe-25b79a00145a\") " pod="kube-system/kindnet-nscfk"
	Nov 29 09:02:06 no-preload-924441 kubelet[2148]: I1129 09:02:06.121077    2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh679\" (UniqueName: \"kubernetes.io/projected/052c2152-0369-4121-b2fe-25b79a00145a-kube-api-access-nh679\") pod \"kindnet-nscfk\" (UID: \"052c2152-0369-4121-b2fe-25b79a00145a\") " pod="kube-system/kindnet-nscfk"
	Nov 29 09:02:06 no-preload-924441 kubelet[2148]: I1129 09:02:06.121138    2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/052c2152-0369-4121-b2fe-25b79a00145a-lib-modules\") pod \"kindnet-nscfk\" (UID: \"052c2152-0369-4121-b2fe-25b79a00145a\") " pod="kube-system/kindnet-nscfk"
	Nov 29 09:02:06 no-preload-924441 kubelet[2148]: I1129 09:02:06.121165    2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/052c2152-0369-4121-b2fe-25b79a00145a-cni-cfg\") pod \"kindnet-nscfk\" (UID: \"052c2152-0369-4121-b2fe-25b79a00145a\") " pod="kube-system/kindnet-nscfk"
	Nov 29 09:02:06 no-preload-924441 kubelet[2148]: I1129 09:02:06.857055    2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-96fcg" podStartSLOduration=1.857034866 podStartE2EDuration="1.857034866s" podCreationTimestamp="2025-11-29 09:02:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:02:06.856894235 +0000 UTC m=+6.140960503" watchObservedRunningTime="2025-11-29 09:02:06.857034866 +0000 UTC m=+6.141101133"
	Nov 29 09:02:10 no-preload-924441 kubelet[2148]: I1129 09:02:10.863762    2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-nscfk" podStartSLOduration=2.927795631 podStartE2EDuration="5.863713725s" podCreationTimestamp="2025-11-29 09:02:05 +0000 UTC" firstStartedPulling="2025-11-29 09:02:06.840294009 +0000 UTC m=+6.124360268" lastFinishedPulling="2025-11-29 09:02:09.776212102 +0000 UTC m=+9.060278362" observedRunningTime="2025-11-29 09:02:10.863649897 +0000 UTC m=+10.147716166" watchObservedRunningTime="2025-11-29 09:02:10.863713725 +0000 UTC m=+10.147779993"
	Nov 29 09:02:20 no-preload-924441 kubelet[2148]: I1129 09:02:20.381108    2148 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 29 09:02:20 no-preload-924441 kubelet[2148]: I1129 09:02:20.530227    2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf2a8ab9-aaca-4ee6-a390-a02099f693d9-config-volume\") pod \"coredns-66bc5c9577-nsh8w\" (UID: \"bf2a8ab9-aaca-4ee6-a390-a02099f693d9\") " pod="kube-system/coredns-66bc5c9577-nsh8w"
	Nov 29 09:02:20 no-preload-924441 kubelet[2148]: I1129 09:02:20.530273    2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/88b64cf8-3233-47bb-be31-6f367a8a1433-tmp\") pod \"storage-provisioner\" (UID: \"88b64cf8-3233-47bb-be31-6f367a8a1433\") " pod="kube-system/storage-provisioner"
	Nov 29 09:02:20 no-preload-924441 kubelet[2148]: I1129 09:02:20.530288    2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m46h9\" (UniqueName: \"kubernetes.io/projected/88b64cf8-3233-47bb-be31-6f367a8a1433-kube-api-access-m46h9\") pod \"storage-provisioner\" (UID: \"88b64cf8-3233-47bb-be31-6f367a8a1433\") " pod="kube-system/storage-provisioner"
	Nov 29 09:02:20 no-preload-924441 kubelet[2148]: I1129 09:02:20.530324    2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h92f6\" (UniqueName: \"kubernetes.io/projected/bf2a8ab9-aaca-4ee6-a390-a02099f693d9-kube-api-access-h92f6\") pod \"coredns-66bc5c9577-nsh8w\" (UID: \"bf2a8ab9-aaca-4ee6-a390-a02099f693d9\") " pod="kube-system/coredns-66bc5c9577-nsh8w"
	Nov 29 09:02:21 no-preload-924441 kubelet[2148]: I1129 09:02:21.890602    2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-nsh8w" podStartSLOduration=15.890582022 podStartE2EDuration="15.890582022s" podCreationTimestamp="2025-11-29 09:02:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:02:21.890580386 +0000 UTC m=+21.174646654" watchObservedRunningTime="2025-11-29 09:02:21.890582022 +0000 UTC m=+21.174648290"
	Nov 29 09:02:21 no-preload-924441 kubelet[2148]: I1129 09:02:21.908640    2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.908618766 podStartE2EDuration="15.908618766s" podCreationTimestamp="2025-11-29 09:02:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:02:21.9085051 +0000 UTC m=+21.192571368" watchObservedRunningTime="2025-11-29 09:02:21.908618766 +0000 UTC m=+21.192685035"
	Nov 29 09:02:23 no-preload-924441 kubelet[2148]: I1129 09:02:23.848480    2148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5vqt\" (UniqueName: \"kubernetes.io/projected/26d445de-fc0e-4bc8-adac-935cd86ee75c-kube-api-access-v5vqt\") pod \"busybox\" (UID: \"26d445de-fc0e-4bc8-adac-935cd86ee75c\") " pod="default/busybox"
	Nov 29 09:02:26 no-preload-924441 kubelet[2148]: I1129 09:02:26.909451    2148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.486034494 podStartE2EDuration="3.909430653s" podCreationTimestamp="2025-11-29 09:02:23 +0000 UTC" firstStartedPulling="2025-11-29 09:02:24.23159866 +0000 UTC m=+23.515664910" lastFinishedPulling="2025-11-29 09:02:26.654994819 +0000 UTC m=+25.939061069" observedRunningTime="2025-11-29 09:02:26.909209395 +0000 UTC m=+26.193275664" watchObservedRunningTime="2025-11-29 09:02:26.909430653 +0000 UTC m=+26.193496921"
	
	
	==> storage-provisioner [07f73647c64253486a8c6bcde1efc5cf43486a9cb6d0209e28918468208ad47c] <==
	I1129 09:02:20.934912       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 09:02:20.937126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:20.942580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:02:20.942795       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:02:20.942990       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-924441_3b51ec5f-33b1-4ec9-b892-014858a7836b!
	I1129 09:02:20.943190       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"86151451-b298-4f83-b326-526915f2b329", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-924441_3b51ec5f-33b1-4ec9-b892-014858a7836b became leader
	W1129 09:02:20.948055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:20.953090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:02:21.044015       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-924441_3b51ec5f-33b1-4ec9-b892-014858a7836b!
	W1129 09:02:22.956833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:22.960625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:24.963399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:24.967130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:26.970962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:26.975411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:28.978148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:28.983442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:30.986756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:30.990592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:32.993859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:32.998486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:35.001496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:35.005052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:37.008881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:02:37.012694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-924441 -n no-preload-924441
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-924441 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (14.48s)
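Note on the storage-provisioner log above: it is dominated by repeated "v1 Endpoints is deprecated in v1.33+" warnings because the provisioner's leader election still uses an Endpoints object (kube-system/k8s.io-minikube-hostpath) as its resource lock rather than a coordination.k8s.io Lease, so the API server warns on every renewal. This is log noise and almost certainly unrelated to the DeployApp assertion that failed. A small sketch for confirming the lock object against a running profile, assuming the no-preload-924441 context still exists when you look (plain kubectl, not part of the test suite):

    # Endpoints object used as the leader-election lock; the current holder is recorded in its annotations
    kubectl --context no-preload-924441 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml

    # a Lease-based controller would show up here instead
    kubectl --context no-preload-924441 -n kube-system get leases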

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (14.5s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-976238 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [dc39d248-15e7-409d-be52-e01d5a094726] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [dc39d248-15e7-409d-be52-e01d5a094726] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.00521576s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-976238 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
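The assertion above is the actual failure: the soft open-files limit seen inside the busybox pod is 1024, while the test expects 1048576. A minimal sketch for reproducing the check and seeing which layer the 1024 comes from, assuming the embed-certs-976238 profile is still up (standard kubectl/minikube/systemd commands, not part of the test harness; the containerd unit name inside the kicbase node is an assumption):

    # re-run the exact check the test performs
    kubectl --context embed-certs-976238 exec busybox -- /bin/sh -c "ulimit -n"

    # open-files limit of the containerd service inside the minikube node container;
    # under the containerd runtime, container processes typically inherit this value
    minikube -p embed-certs-976238 ssh -- "systemctl show containerd --property=LimitNOFILE"

    # limits of the node's PID 1, for comparison
    minikube -p embed-certs-976238 ssh -- "grep 'open files' /proc/1/limits"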
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-976238
helpers_test.go:243: (dbg) docker inspect embed-certs-976238:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b9aa399a064ac9613732c61e159fce0d41d08c5badd0810f72f8168d34862ef0",
	        "Created": "2025-11-29T09:03:50.315039357Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 522084,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:03:50.351516837Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/b9aa399a064ac9613732c61e159fce0d41d08c5badd0810f72f8168d34862ef0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b9aa399a064ac9613732c61e159fce0d41d08c5badd0810f72f8168d34862ef0/hostname",
	        "HostsPath": "/var/lib/docker/containers/b9aa399a064ac9613732c61e159fce0d41d08c5badd0810f72f8168d34862ef0/hosts",
	        "LogPath": "/var/lib/docker/containers/b9aa399a064ac9613732c61e159fce0d41d08c5badd0810f72f8168d34862ef0/b9aa399a064ac9613732c61e159fce0d41d08c5badd0810f72f8168d34862ef0-json.log",
	        "Name": "/embed-certs-976238",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-976238:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-976238",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b9aa399a064ac9613732c61e159fce0d41d08c5badd0810f72f8168d34862ef0",
	                "LowerDir": "/var/lib/docker/overlay2/0b5c677f90529901bd7d2cf30d320860e575018d3e243e4cebb38c68ed01524c-init/diff:/var/lib/docker/overlay2/eb180691bce18b8d981b2d61ed0962851c615364ed77c18ff66d559424569005/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0b5c677f90529901bd7d2cf30d320860e575018d3e243e4cebb38c68ed01524c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0b5c677f90529901bd7d2cf30d320860e575018d3e243e4cebb38c68ed01524c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0b5c677f90529901bd7d2cf30d320860e575018d3e243e4cebb38c68ed01524c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-976238",
	                "Source": "/var/lib/docker/volumes/embed-certs-976238/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-976238",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-976238",
	                "name.minikube.sigs.k8s.io": "embed-certs-976238",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a9099db3db9f2535f093dec06c8172c0f233f0eb38b3164e489f49d8a5d5278b",
	            "SandboxKey": "/var/run/docker/netns/a9099db3db9f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-976238": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0c5a65c00c4521ebfb70e7d67e87f09e44fd10abe11b8894838ca9901e14aee8",
	                    "EndpointID": "c028cc55c6b310756a74c1ed9fc3d20b0d2a1935ee982395deaf6596a627d924",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "16:c0:59:19:41:ad",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-976238",
	                        "b9aa399a064a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
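The inspect dump above is the full record for the embed-certs-976238 node container. For this failure the relevant details are HostConfig.Ulimits, which is empty (so the container falls back to the Docker daemon's default ulimits rather than an explicit per-container override), and the host port published for the in-container API server port 8443. A short sketch for pulling just those fields with standard docker templating (illustrative only; the container name is taken from this run):

    # per-container ulimit overrides; an empty list means daemon-level defaults apply
    docker inspect --format '{{json .HostConfig.Ulimits}}' embed-certs-976238

    # host port mapped to 8443/tcp inside the node (33081 in the output above)
    docker inspect --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-976238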
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-976238 -n embed-certs-976238
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-976238 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-976238 logs -n 25: (3.145744701s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ stop    │ -p no-preload-924441 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-924441            │ jenkins │ v1.37.0 │ 29 Nov 25 09:02 UTC │ 29 Nov 25 09:02 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-295154 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-295154       │ jenkins │ v1.37.0 │ 29 Nov 25 09:02 UTC │ 29 Nov 25 09:02 UTC │
	│ start   │ -p old-k8s-version-295154 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-295154       │ jenkins │ v1.37.0 │ 29 Nov 25 09:02 UTC │ 29 Nov 25 09:03 UTC │
	│ addons  │ enable dashboard -p no-preload-924441 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-924441            │ jenkins │ v1.37.0 │ 29 Nov 25 09:02 UTC │ 29 Nov 25 09:02 UTC │
	│ start   │ -p no-preload-924441 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-924441            │ jenkins │ v1.37.0 │ 29 Nov 25 09:02 UTC │ 29 Nov 25 09:03 UTC │
	│ image   │ old-k8s-version-295154 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-295154       │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ pause   │ -p old-k8s-version-295154 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-295154       │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ unpause │ -p old-k8s-version-295154 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-295154       │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ delete  │ -p old-k8s-version-295154                                                                                                                                                                                                                           │ old-k8s-version-295154       │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ delete  │ -p old-k8s-version-295154                                                                                                                                                                                                                           │ old-k8s-version-295154       │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ start   │ -p embed-certs-976238 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-976238           │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:04 UTC │
	│ image   │ no-preload-924441 image list --format=json                                                                                                                                                                                                          │ no-preload-924441            │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ pause   │ -p no-preload-924441 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-924441            │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ unpause │ -p no-preload-924441 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-924441            │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ delete  │ -p no-preload-924441                                                                                                                                                                                                                                │ no-preload-924441            │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ delete  │ -p no-preload-924441                                                                                                                                                                                                                                │ no-preload-924441            │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ delete  │ -p disable-driver-mounts-286131                                                                                                                                                                                                                     │ disable-driver-mounts-286131 │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ start   │ -p default-k8s-diff-port-357829 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-357829 │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │                     │
	│ start   │ -p cert-expiration-368536 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-368536       │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:04 UTC │
	│ delete  │ -p cert-expiration-368536                                                                                                                                                                                                                           │ cert-expiration-368536       │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │ 29 Nov 25 09:04 UTC │
	│ start   │ -p newest-cni-106601 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-106601            │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │                     │
	│ start   │ -p kubernetes-upgrade-806701 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-806701    │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │                     │
	│ start   │ -p kubernetes-upgrade-806701 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-806701    │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │ 29 Nov 25 09:04 UTC │
	│ delete  │ -p kubernetes-upgrade-806701                                                                                                                                                                                                                        │ kubernetes-upgrade-806701    │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │ 29 Nov 25 09:04 UTC │
	│ start   │ -p auto-770004 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-770004                  │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:04:32
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:04:32.732948  535908 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:04:32.733062  535908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:04:32.733075  535908 out.go:374] Setting ErrFile to fd 2...
	I1129 09:04:32.733080  535908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:04:32.733355  535908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-255825/.minikube/bin
	I1129 09:04:32.733850  535908 out.go:368] Setting JSON to false
	I1129 09:04:32.735165  535908 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6417,"bootTime":1764400656,"procs":320,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:04:32.735220  535908 start.go:143] virtualization: kvm guest
	I1129 09:04:32.737261  535908 out.go:179] * [auto-770004] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:04:32.738414  535908 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:04:32.738455  535908 notify.go:221] Checking for updates...
	I1129 09:04:32.740552  535908 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:04:32.741831  535908 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-255825/kubeconfig
	I1129 09:04:32.742961  535908 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-255825/.minikube
	I1129 09:04:32.743998  535908 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:04:32.745074  535908 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:04:32.746601  535908 config.go:182] Loaded profile config "default-k8s-diff-port-357829": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:04:32.746687  535908 config.go:182] Loaded profile config "embed-certs-976238": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:04:32.746799  535908 config.go:182] Loaded profile config "newest-cni-106601": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:04:32.746886  535908 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:04:32.771343  535908 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 09:04:32.771454  535908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:04:32.834282  535908 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-29 09:04:32.824417383 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:04:32.834380  535908 docker.go:319] overlay module found
	I1129 09:04:32.836031  535908 out.go:179] * Using the docker driver based on user configuration
	I1129 09:04:32.837002  535908 start.go:309] selected driver: docker
	I1129 09:04:32.837015  535908 start.go:927] validating driver "docker" against <nil>
	I1129 09:04:32.837039  535908 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:04:32.837562  535908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:04:32.895888  535908 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-29 09:04:32.885317909 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:04:32.896061  535908 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 09:04:32.896270  535908 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:04:32.898062  535908 out.go:179] * Using Docker driver with root privileges
	I1129 09:04:32.899330  535908 cni.go:84] Creating CNI manager for ""
	I1129 09:04:32.899394  535908 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:04:32.899405  535908 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 09:04:32.899498  535908 start.go:353] cluster config:
	{Name:auto-770004 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-770004 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:con
tainerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:04:32.900658  535908 out.go:179] * Starting "auto-770004" primary control-plane node in "auto-770004" cluster
	I1129 09:04:32.901675  535908 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1129 09:04:32.902635  535908 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:04:32.903607  535908 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:04:32.903648  535908 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1129 09:04:32.903663  535908 cache.go:65] Caching tarball of preloaded images
	I1129 09:04:32.903700  535908 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:04:32.903805  535908 preload.go:238] Found /home/jenkins/minikube-integration/22000-255825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1129 09:04:32.903818  535908 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1129 09:04:32.903946  535908 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/config.json ...
	I1129 09:04:32.903977  535908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/config.json: {Name:mk6b7a26c494386e8ab18d3d35aebe1608fed877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:04:32.926278  535908 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:04:32.926297  535908 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:04:32.926312  535908 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:04:32.926340  535908 start.go:360] acquireMachinesLock for auto-770004: {Name:mk40429e53b2b4db07988f43af305c63a1e72053 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:04:32.926432  535908 start.go:364] duration metric: took 74.018µs to acquireMachinesLock for "auto-770004"
	I1129 09:04:32.926453  535908 start.go:93] Provisioning new machine with config: &{Name:auto-770004 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-770004 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1129 09:04:32.926519  535908 start.go:125] createHost starting for "" (driver="docker")
	W1129 09:04:29.542575  523936 node_ready.go:57] node "default-k8s-diff-port-357829" has "Ready":"False" status (will retry)
	W1129 09:04:32.041860  523936 node_ready.go:57] node "default-k8s-diff-port-357829" has "Ready":"False" status (will retry)
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	dd55171d68c77       56cc512116c8f       6 seconds ago       Running             busybox                   0                   6f6e3b02a0ee2       busybox                                      default
	3d08a463e6ad8       52546a367cc9e       11 seconds ago      Running             coredns                   0                   0cb648ccf514a       coredns-66bc5c9577-sz2td                     kube-system
	d54b901347f40       6e38f40d628db       11 seconds ago      Running             storage-provisioner       0                   d674c2e459d17       storage-provisioner                          kube-system
	af29e75e8081c       409467f978b4a       23 seconds ago      Running             kindnet-cni               0                   ac240c4cc6401       kindnet-k5955                                kube-system
	a160c497eeea6       fc25172553d79       23 seconds ago      Running             kube-proxy                0                   fd597c18e12f8       kube-proxy-prv6p                             kube-system
	889bc1e303b19       7dd6aaa1717ab       34 seconds ago      Running             kube-scheduler            0                   3016bab129e62       kube-scheduler-embed-certs-976238            kube-system
	957926049f5ef       c80c8dbafe7dd       34 seconds ago      Running             kube-controller-manager   0                   caadaa0aa4ea9       kube-controller-manager-embed-certs-976238   kube-system
	576a1e0a480b2       c3994bc696102       34 seconds ago      Running             kube-apiserver            0                   25b754b728260       kube-apiserver-embed-certs-976238            kube-system
	0ffa2844b911a       5f1f5298c888d       34 seconds ago      Running             etcd                      0                   ae9e84e8fbfc3       etcd-embed-certs-976238                      kube-system
	
	
	==> containerd <==
	Nov 29 09:04:23 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:23.791299648Z" level=info msg="Container 3d08a463e6ad880d8ddd772750b65fab105a54e78bc396c09ab6e425aa191ce7: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:04:23 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:23.794505636Z" level=info msg="CreateContainer within sandbox \"d674c2e459d17bd8b042d0a10da3e95b633fec12d1a59419ab7e4c4835297aff\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"d54b901347f406aa3719940a4658b89f3c2a83e1de281b1e3ab1a3b70f37b029\""
	Nov 29 09:04:23 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:23.795068467Z" level=info msg="StartContainer for \"d54b901347f406aa3719940a4658b89f3c2a83e1de281b1e3ab1a3b70f37b029\""
	Nov 29 09:04:23 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:23.796333066Z" level=info msg="connecting to shim d54b901347f406aa3719940a4658b89f3c2a83e1de281b1e3ab1a3b70f37b029" address="unix:///run/containerd/s/ae8a13c5e29679193a47c33a520f0e7db8e30e1b1a014922e88d1f895455e018" protocol=ttrpc version=3
	Nov 29 09:04:23 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:23.799983799Z" level=info msg="CreateContainer within sandbox \"0cb648ccf514a04e7c606e62d3205b84ae3686418ec59196dd7117ceb2628295\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3d08a463e6ad880d8ddd772750b65fab105a54e78bc396c09ab6e425aa191ce7\""
	Nov 29 09:04:23 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:23.800880296Z" level=info msg="StartContainer for \"3d08a463e6ad880d8ddd772750b65fab105a54e78bc396c09ab6e425aa191ce7\""
	Nov 29 09:04:23 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:23.802981936Z" level=info msg="connecting to shim 3d08a463e6ad880d8ddd772750b65fab105a54e78bc396c09ab6e425aa191ce7" address="unix:///run/containerd/s/a30f3f9facc588f9011512b86bfd76b3f5f86a6834f6dc341827250e97b9543d" protocol=ttrpc version=3
	Nov 29 09:04:23 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:23.855805854Z" level=info msg="StartContainer for \"d54b901347f406aa3719940a4658b89f3c2a83e1de281b1e3ab1a3b70f37b029\" returns successfully"
	Nov 29 09:04:23 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:23.870348074Z" level=info msg="StartContainer for \"3d08a463e6ad880d8ddd772750b65fab105a54e78bc396c09ab6e425aa191ce7\" returns successfully"
	Nov 29 09:04:26 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:26.751222788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:dc39d248-15e7-409d-be52-e01d5a094726,Namespace:default,Attempt:0,}"
	Nov 29 09:04:26 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:26.797976137Z" level=info msg="connecting to shim 6f6e3b02a0ee26da7854b922b28b5028dcc8c006edfb7e2d43007ce3a402a1b9" address="unix:///run/containerd/s/d246f54ee907b99e916171a4462a5c8d6bdf4a662d767261b183ed2853248f8f" namespace=k8s.io protocol=ttrpc version=3
	Nov 29 09:04:26 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:26.883803045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:dc39d248-15e7-409d-be52-e01d5a094726,Namespace:default,Attempt:0,} returns sandbox id \"6f6e3b02a0ee26da7854b922b28b5028dcc8c006edfb7e2d43007ce3a402a1b9\""
	Nov 29 09:04:26 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:26.886676269Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 29 09:04:29 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:29.142242797Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:04:29 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:29.142943604Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396646"
	Nov 29 09:04:29 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:29.143977541Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:04:29 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:29.145860824Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:04:29 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:29.146249978Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.25944198s"
	Nov 29 09:04:29 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:29.146294301Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 29 09:04:29 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:29.150361265Z" level=info msg="CreateContainer within sandbox \"6f6e3b02a0ee26da7854b922b28b5028dcc8c006edfb7e2d43007ce3a402a1b9\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 29 09:04:29 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:29.156677687Z" level=info msg="Container dd55171d68c77a1b046242badc74dcba50315eb6badf827a4a8510b73cf77d5d: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:04:29 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:29.162210510Z" level=info msg="CreateContainer within sandbox \"6f6e3b02a0ee26da7854b922b28b5028dcc8c006edfb7e2d43007ce3a402a1b9\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"dd55171d68c77a1b046242badc74dcba50315eb6badf827a4a8510b73cf77d5d\""
	Nov 29 09:04:29 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:29.162809777Z" level=info msg="StartContainer for \"dd55171d68c77a1b046242badc74dcba50315eb6badf827a4a8510b73cf77d5d\""
	Nov 29 09:04:29 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:29.163745993Z" level=info msg="connecting to shim dd55171d68c77a1b046242badc74dcba50315eb6badf827a4a8510b73cf77d5d" address="unix:///run/containerd/s/d246f54ee907b99e916171a4462a5c8d6bdf4a662d767261b183ed2853248f8f" protocol=ttrpc version=3
	Nov 29 09:04:29 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:29.217141529Z" level=info msg="StartContainer for \"dd55171d68c77a1b046242badc74dcba50315eb6badf827a4a8510b73cf77d5d\" returns successfully"
	
	
	==> coredns [3d08a463e6ad880d8ddd772750b65fab105a54e78bc396c09ab6e425aa191ce7] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43947 - 5403 "HINFO IN 6459388017159108501.114435138945518211. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.034969011s
	
	
	==> describe nodes <==
	Name:               embed-certs-976238
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-976238
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=embed-certs-976238
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_04_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:04:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-976238
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:04:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:04:23 +0000   Sat, 29 Nov 2025 09:04:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:04:23 +0000   Sat, 29 Nov 2025 09:04:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:04:23 +0000   Sat, 29 Nov 2025 09:04:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:04:23 +0000   Sat, 29 Nov 2025 09:04:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-976238
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                d886b6c4-ae69-4d2d-bf27-37d976e31f50
	  Boot ID:                    b81dce2f-73d5-4349-b473-aa1210058cb8
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-sz2td                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-embed-certs-976238                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-k5955                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-embed-certs-976238             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-embed-certs-976238    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-prv6p                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-embed-certs-976238             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s (x8 over 35s)  kubelet          Node embed-certs-976238 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x8 over 35s)  kubelet          Node embed-certs-976238 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x7 over 35s)  kubelet          Node embed-certs-976238 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  35s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  30s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  30s                kubelet          Node embed-certs-976238 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet          Node embed-certs-976238 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet          Node embed-certs-976238 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node embed-certs-976238 event: Registered Node embed-certs-976238 in Controller
	  Normal  NodeReady                12s                kubelet          Node embed-certs-976238 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov29 07:17] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001881] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084003] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.378167] i8042: Warning: Keylock active
	[  +0.012106] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.460417] block sda: the capability attribute has been deprecated.
	[  +0.079627] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.021012] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.285522] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [0ffa2844b911a1c5b0c8ac6dbe8eee086c24ffbc4c864cec6d90e875a1a16e8a] <==
	{"level":"info","ts":"2025-11-29T09:04:11.471882Z","caller":"traceutil/trace.go:172","msg":"trace[645552265] transaction","detail":"{read_only:false; response_revision:373; number_of_response:1; }","duration":"202.858269ms","start":"2025-11-29T09:04:11.269010Z","end":"2025-11-29T09:04:11.471869Z","steps":["trace[645552265] 'process raft request'  (duration: 202.62109ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.472033Z","caller":"traceutil/trace.go:172","msg":"trace[848082240] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"202.912685ms","start":"2025-11-29T09:04:11.269106Z","end":"2025-11-29T09:04:11.472019Z","steps":["trace[848082240] 'process raft request'  (duration: 202.56652ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.471944Z","caller":"traceutil/trace.go:172","msg":"trace[2087248779] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"205.879787ms","start":"2025-11-29T09:04:11.266053Z","end":"2025-11-29T09:04:11.471933Z","steps":["trace[2087248779] 'process raft request'  (duration: 205.05413ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.472285Z","caller":"traceutil/trace.go:172","msg":"trace[328840537] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"194.452275ms","start":"2025-11-29T09:04:11.277821Z","end":"2025-11-29T09:04:11.472273Z","steps":["trace[328840537] 'process raft request'  (duration: 193.947835ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.472321Z","caller":"traceutil/trace.go:172","msg":"trace[263126455] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"202.362956ms","start":"2025-11-29T09:04:11.269948Z","end":"2025-11-29T09:04:11.472311Z","steps":["trace[263126455] 'process raft request'  (duration: 201.758522ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.472383Z","caller":"traceutil/trace.go:172","msg":"trace[1479975396] transaction","detail":"{read_only:false; response_revision:372; number_of_response:1; }","duration":"203.388231ms","start":"2025-11-29T09:04:11.268987Z","end":"2025-11-29T09:04:11.472376Z","steps":["trace[1479975396] 'process raft request'  (duration: 202.589992ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.472474Z","caller":"traceutil/trace.go:172","msg":"trace[991560380] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"203.902773ms","start":"2025-11-29T09:04:11.268564Z","end":"2025-11-29T09:04:11.472467Z","steps":["trace[991560380] 'process raft request'  (duration: 202.950005ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.723956Z","caller":"traceutil/trace.go:172","msg":"trace[472951656] linearizableReadLoop","detail":"{readStateIndex:388; appliedIndex:388; }","duration":"162.352265ms","start":"2025-11-29T09:04:11.561580Z","end":"2025-11-29T09:04:11.723933Z","steps":["trace[472951656] 'read index received'  (duration: 162.344086ms)","trace[472951656] 'applied index is now lower than readState.Index'  (duration: 7.096µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-29T09:04:11.724495Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"234.102918ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" limit:1 ","response":"range_response_count:1 size:2878"}
	{"level":"info","ts":"2025-11-29T09:04:11.725497Z","caller":"traceutil/trace.go:172","msg":"trace[1634254383] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:377; }","duration":"235.127914ms","start":"2025-11-29T09:04:11.490355Z","end":"2025-11-29T09:04:11.725483Z","steps":["trace[1634254383] 'agreement among raft nodes before linearized reading'  (duration: 233.726265ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.724924Z","caller":"traceutil/trace.go:172","msg":"trace[176791064] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"228.817642ms","start":"2025-11-29T09:04:11.496093Z","end":"2025-11-29T09:04:11.724911Z","steps":["trace[176791064] 'process raft request'  (duration: 228.771631ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.724959Z","caller":"traceutil/trace.go:172","msg":"trace[1122891804] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"237.103011ms","start":"2025-11-29T09:04:11.487851Z","end":"2025-11-29T09:04:11.724954Z","steps":["trace[1122891804] 'process raft request'  (duration: 236.287787ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.725125Z","caller":"traceutil/trace.go:172","msg":"trace[1278496739] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"235.134079ms","start":"2025-11-29T09:04:11.489981Z","end":"2025-11-29T09:04:11.725115Z","steps":["trace[1278496739] 'process raft request'  (duration: 234.763037ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.725152Z","caller":"traceutil/trace.go:172","msg":"trace[2081360934] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"233.055169ms","start":"2025-11-29T09:04:11.492091Z","end":"2025-11-29T09:04:11.725146Z","steps":["trace[2081360934] 'process raft request'  (duration: 232.74166ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.731636Z","caller":"traceutil/trace.go:172","msg":"trace[137997452] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"166.17819ms","start":"2025-11-29T09:04:11.565415Z","end":"2025-11-29T09:04:11.731593Z","steps":["trace[137997452] 'process raft request'  (duration: 165.751628ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.732073Z","caller":"traceutil/trace.go:172","msg":"trace[979106742] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"161.052435ms","start":"2025-11-29T09:04:11.571010Z","end":"2025-11-29T09:04:11.732063Z","steps":["trace[979106742] 'process raft request'  (duration: 160.25485ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-29T09:04:11.736991Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"147.725094ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kindnet\" limit:1 ","response":"range_response_count:1 size:520"}
	{"level":"info","ts":"2025-11-29T09:04:11.737544Z","caller":"traceutil/trace.go:172","msg":"trace[2133078243] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kindnet; range_end:; response_count:1; response_revision:382; }","duration":"148.287197ms","start":"2025-11-29T09:04:11.589242Z","end":"2025-11-29T09:04:11.737529Z","steps":["trace[2133078243] 'agreement among raft nodes before linearized reading'  (duration: 142.709646ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.894674Z","caller":"traceutil/trace.go:172","msg":"trace[706579059] linearizableReadLoop","detail":"{readStateIndex:396; appliedIndex:396; }","duration":"139.02579ms","start":"2025-11-29T09:04:11.755627Z","end":"2025-11-29T09:04:11.894653Z","steps":["trace[706579059] 'read index received'  (duration: 139.019091ms)","trace[706579059] 'applied index is now lower than readState.Index'  (duration: 5.568µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-29T09:04:11.956673Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"201.013279ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-29T09:04:11.956817Z","caller":"traceutil/trace.go:172","msg":"trace[1785179962] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:0; response_revision:385; }","duration":"201.170542ms","start":"2025-11-29T09:04:11.755623Z","end":"2025-11-29T09:04:11.956794Z","steps":["trace[1785179962] 'agreement among raft nodes before linearized reading'  (duration: 139.257564ms)","trace[1785179962] 'range keys from in-memory index tree'  (duration: 61.707341ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-29T09:04:11.956827Z","caller":"traceutil/trace.go:172","msg":"trace[598877949] transaction","detail":"{read_only:false; response_revision:386; number_of_response:1; }","duration":"203.609347ms","start":"2025-11-29T09:04:11.753202Z","end":"2025-11-29T09:04:11.956812Z","steps":["trace[598877949] 'process raft request'  (duration: 141.835989ms)","trace[598877949] 'compare'  (duration: 61.64539ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-29T09:04:11.960140Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"194.447304ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4299"}
	{"level":"info","ts":"2025-11-29T09:04:11.960200Z","caller":"traceutil/trace.go:172","msg":"trace[1605401434] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:386; }","duration":"194.518224ms","start":"2025-11-29T09:04:11.765672Z","end":"2025-11-29T09:04:11.960190Z","steps":["trace[1605401434] 'agreement among raft nodes before linearized reading'  (duration: 194.344334ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.960620Z","caller":"traceutil/trace.go:172","msg":"trace[1647820491] transaction","detail":"{read_only:false; response_revision:387; number_of_response:1; }","duration":"200.53021ms","start":"2025-11-29T09:04:11.760068Z","end":"2025-11-29T09:04:11.960598Z","steps":["trace[1647820491] 'process raft request'  (duration: 200.273546ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:04:35 up  1:46,  0 user,  load average: 5.07, 3.55, 11.44
	Linux embed-certs-976238 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [af29e75e8081ce6ac2e6ffb68d826e85fb57d344b529b867ca7cc0d8f6f6194c] <==
	I1129 09:04:13.025714       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:04:13.026102       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1129 09:04:13.026249       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:04:13.026263       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:04:13.026286       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:04:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:04:13.227629       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:04:13.280129       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:04:13.280147       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:04:13.280269       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:04:13.580271       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:04:13.580302       1 metrics.go:72] Registering metrics
	I1129 09:04:13.580379       1 controller.go:711] "Syncing nftables rules"
	I1129 09:04:23.232344       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:04:23.232405       1 main.go:301] handling current node
	I1129 09:04:33.228192       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:04:33.228242       1 main.go:301] handling current node
	
	
	==> kube-apiserver [576a1e0a480b29af60f77b33edb8f6d693f5b9f7ff8b1eb26756e23a22dde168] <==
	I1129 09:04:02.783928       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1129 09:04:02.785121       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 09:04:02.795489       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1129 09:04:02.800267       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 09:04:02.800426       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:04:02.801918       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:04:02.834704       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:04:03.688928       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1129 09:04:03.693085       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1129 09:04:03.693105       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:04:04.225905       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:04:04.265639       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:04:04.394718       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1129 09:04:04.401154       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1129 09:04:04.402224       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:04:04.406769       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:04:04.747405       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:04:05.399324       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:04:05.411308       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1129 09:04:05.418370       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1129 09:04:10.066907       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:04:10.195951       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:04:10.637981       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:04:10.894987       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1129 09:04:34.591894       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:50186: use of closed network connection
	
	
	==> kube-controller-manager [957926049f5ef9ca6313979ec1f6a3ba063873beb80384c68e46faf5c8d293c8] <==
	I1129 09:04:09.748538       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1129 09:04:09.751003       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1129 09:04:09.751038       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1129 09:04:09.751066       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1129 09:04:09.752444       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1129 09:04:09.752492       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1129 09:04:09.752514       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1129 09:04:09.752518       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1129 09:04:09.752529       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1129 09:04:09.756297       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1129 09:04:09.756707       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1129 09:04:09.757979       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1129 09:04:09.758048       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:04:09.758067       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1129 09:04:09.759403       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1129 09:04:09.767643       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 09:04:09.773023       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1129 09:04:09.777406       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-976238" podCIDRs=["10.244.0.0/24"]
	I1129 09:04:09.779931       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 09:04:09.785321       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1129 09:04:09.791721       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1129 09:04:09.795933       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1129 09:04:09.798145       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 09:04:09.806816       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:04:24.703781       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a160c497eeea6ae4a856d37924cbee711fad64a776f2b530408e36774150397e] <==
	I1129 09:04:12.372059       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:04:12.465813       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:04:12.566818       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:04:12.566996       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1129 09:04:12.567169       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:04:12.661415       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:04:12.661531       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:04:12.669543       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:04:12.670582       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:04:12.670620       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:04:12.673251       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:04:12.673274       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:04:12.673312       1 config.go:200] "Starting service config controller"
	I1129 09:04:12.673318       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:04:12.673679       1 config.go:309] "Starting node config controller"
	I1129 09:04:12.673689       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:04:12.673696       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:04:12.673882       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:04:12.673893       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:04:12.773383       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 09:04:12.773436       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1129 09:04:12.776291       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [889bc1e303b190e5b435a081a4fa45511ba81f5faec06e2bc70c3429f3972219] <==
	E1129 09:04:02.754234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 09:04:02.754270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 09:04:02.754369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 09:04:02.754508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 09:04:02.754606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 09:04:02.754692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 09:04:02.754756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 09:04:02.755246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 09:04:02.755287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 09:04:02.755807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 09:04:02.755834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 09:04:02.755813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 09:04:02.755986       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:04:02.756043       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 09:04:03.587723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 09:04:03.689248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 09:04:03.696333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 09:04:03.711520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1129 09:04:03.728165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 09:04:03.823074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 09:04:03.867774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:04:03.923999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 09:04:04.019857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 09:04:04.047220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1129 09:04:05.647642       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 09:04:06 embed-certs-976238 kubelet[1438]: E1129 09:04:06.276543    1438 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-embed-certs-976238\" already exists" pod="kube-system/kube-apiserver-embed-certs-976238"
	Nov 29 09:04:06 embed-certs-976238 kubelet[1438]: I1129 09:04:06.301641    1438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-976238" podStartSLOduration=1.301616588 podStartE2EDuration="1.301616588s" podCreationTimestamp="2025-11-29 09:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:06.288772055 +0000 UTC m=+1.131555275" watchObservedRunningTime="2025-11-29 09:04:06.301616588 +0000 UTC m=+1.144399799"
	Nov 29 09:04:06 embed-certs-976238 kubelet[1438]: I1129 09:04:06.302036    1438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-976238" podStartSLOduration=1.302020055 podStartE2EDuration="1.302020055s" podCreationTimestamp="2025-11-29 09:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:06.301568982 +0000 UTC m=+1.144352197" watchObservedRunningTime="2025-11-29 09:04:06.302020055 +0000 UTC m=+1.144803268"
	Nov 29 09:04:06 embed-certs-976238 kubelet[1438]: I1129 09:04:06.324670    1438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-976238" podStartSLOduration=1.324649539 podStartE2EDuration="1.324649539s" podCreationTimestamp="2025-11-29 09:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:06.311916558 +0000 UTC m=+1.154699794" watchObservedRunningTime="2025-11-29 09:04:06.324649539 +0000 UTC m=+1.167432763"
	Nov 29 09:04:06 embed-certs-976238 kubelet[1438]: I1129 09:04:06.324808    1438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-976238" podStartSLOduration=1.324800879 podStartE2EDuration="1.324800879s" podCreationTimestamp="2025-11-29 09:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:06.324789028 +0000 UTC m=+1.167572237" watchObservedRunningTime="2025-11-29 09:04:06.324800879 +0000 UTC m=+1.167584093"
	Nov 29 09:04:09 embed-certs-976238 kubelet[1438]: I1129 09:04:09.810526    1438 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 29 09:04:09 embed-certs-976238 kubelet[1438]: I1129 09:04:09.811264    1438 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 29 09:04:11 embed-certs-976238 kubelet[1438]: I1129 09:04:11.485023    1438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/709dfd6b-61e8-43aa-97ee-f1c6adeb5fbd-cni-cfg\") pod \"kindnet-k5955\" (UID: \"709dfd6b-61e8-43aa-97ee-f1c6adeb5fbd\") " pod="kube-system/kindnet-k5955"
	Nov 29 09:04:11 embed-certs-976238 kubelet[1438]: I1129 09:04:11.485545    1438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/709dfd6b-61e8-43aa-97ee-f1c6adeb5fbd-xtables-lock\") pod \"kindnet-k5955\" (UID: \"709dfd6b-61e8-43aa-97ee-f1c6adeb5fbd\") " pod="kube-system/kindnet-k5955"
	Nov 29 09:04:11 embed-certs-976238 kubelet[1438]: I1129 09:04:11.486188    1438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/709dfd6b-61e8-43aa-97ee-f1c6adeb5fbd-lib-modules\") pod \"kindnet-k5955\" (UID: \"709dfd6b-61e8-43aa-97ee-f1c6adeb5fbd\") " pod="kube-system/kindnet-k5955"
	Nov 29 09:04:11 embed-certs-976238 kubelet[1438]: I1129 09:04:11.486327    1438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqnll\" (UniqueName: \"kubernetes.io/projected/709dfd6b-61e8-43aa-97ee-f1c6adeb5fbd-kube-api-access-sqnll\") pod \"kindnet-k5955\" (UID: \"709dfd6b-61e8-43aa-97ee-f1c6adeb5fbd\") " pod="kube-system/kindnet-k5955"
	Nov 29 09:04:11 embed-certs-976238 kubelet[1438]: I1129 09:04:11.788357    1438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/193bf7f7-0d38-4430-b64e-1c2c3b617d08-kube-proxy\") pod \"kube-proxy-prv6p\" (UID: \"193bf7f7-0d38-4430-b64e-1c2c3b617d08\") " pod="kube-system/kube-proxy-prv6p"
	Nov 29 09:04:11 embed-certs-976238 kubelet[1438]: I1129 09:04:11.789165    1438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/193bf7f7-0d38-4430-b64e-1c2c3b617d08-lib-modules\") pod \"kube-proxy-prv6p\" (UID: \"193bf7f7-0d38-4430-b64e-1c2c3b617d08\") " pod="kube-system/kube-proxy-prv6p"
	Nov 29 09:04:11 embed-certs-976238 kubelet[1438]: I1129 09:04:11.789210    1438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5t58\" (UniqueName: \"kubernetes.io/projected/193bf7f7-0d38-4430-b64e-1c2c3b617d08-kube-api-access-f5t58\") pod \"kube-proxy-prv6p\" (UID: \"193bf7f7-0d38-4430-b64e-1c2c3b617d08\") " pod="kube-system/kube-proxy-prv6p"
	Nov 29 09:04:11 embed-certs-976238 kubelet[1438]: I1129 09:04:11.789246    1438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/193bf7f7-0d38-4430-b64e-1c2c3b617d08-xtables-lock\") pod \"kube-proxy-prv6p\" (UID: \"193bf7f7-0d38-4430-b64e-1c2c3b617d08\") " pod="kube-system/kube-proxy-prv6p"
	Nov 29 09:04:13 embed-certs-976238 kubelet[1438]: I1129 09:04:13.300963    1438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-prv6p" podStartSLOduration=2.300943045 podStartE2EDuration="2.300943045s" podCreationTimestamp="2025-11-29 09:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:13.30065997 +0000 UTC m=+8.143443193" watchObservedRunningTime="2025-11-29 09:04:13.300943045 +0000 UTC m=+8.143726259"
	Nov 29 09:04:13 embed-certs-976238 kubelet[1438]: I1129 09:04:13.321602    1438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-k5955" podStartSLOduration=2.321580902 podStartE2EDuration="2.321580902s" podCreationTimestamp="2025-11-29 09:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:13.321175977 +0000 UTC m=+8.163959191" watchObservedRunningTime="2025-11-29 09:04:13.321580902 +0000 UTC m=+8.164364113"
	Nov 29 09:04:23 embed-certs-976238 kubelet[1438]: I1129 09:04:23.274173    1438 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 29 09:04:23 embed-certs-976238 kubelet[1438]: I1129 09:04:23.377463    1438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4gkv\" (UniqueName: \"kubernetes.io/projected/c34076a4-1198-4240-b8f1-28d44891e684-kube-api-access-d4gkv\") pod \"coredns-66bc5c9577-sz2td\" (UID: \"c34076a4-1198-4240-b8f1-28d44891e684\") " pod="kube-system/coredns-66bc5c9577-sz2td"
	Nov 29 09:04:23 embed-certs-976238 kubelet[1438]: I1129 09:04:23.377546    1438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c0d65141-7c62-422c-9cb7-66594118ec4e-tmp\") pod \"storage-provisioner\" (UID: \"c0d65141-7c62-422c-9cb7-66594118ec4e\") " pod="kube-system/storage-provisioner"
	Nov 29 09:04:23 embed-certs-976238 kubelet[1438]: I1129 09:04:23.377587    1438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56czn\" (UniqueName: \"kubernetes.io/projected/c0d65141-7c62-422c-9cb7-66594118ec4e-kube-api-access-56czn\") pod \"storage-provisioner\" (UID: \"c0d65141-7c62-422c-9cb7-66594118ec4e\") " pod="kube-system/storage-provisioner"
	Nov 29 09:04:23 embed-certs-976238 kubelet[1438]: I1129 09:04:23.377617    1438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c34076a4-1198-4240-b8f1-28d44891e684-config-volume\") pod \"coredns-66bc5c9577-sz2td\" (UID: \"c34076a4-1198-4240-b8f1-28d44891e684\") " pod="kube-system/coredns-66bc5c9577-sz2td"
	Nov 29 09:04:24 embed-certs-976238 kubelet[1438]: I1129 09:04:24.336831    1438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sz2td" podStartSLOduration=13.336808932 podStartE2EDuration="13.336808932s" podCreationTimestamp="2025-11-29 09:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:24.336307843 +0000 UTC m=+19.179091072" watchObservedRunningTime="2025-11-29 09:04:24.336808932 +0000 UTC m=+19.179592146"
	Nov 29 09:04:24 embed-certs-976238 kubelet[1438]: I1129 09:04:24.347858    1438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.347836325 podStartE2EDuration="12.347836325s" podCreationTimestamp="2025-11-29 09:04:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:24.347315126 +0000 UTC m=+19.190098340" watchObservedRunningTime="2025-11-29 09:04:24.347836325 +0000 UTC m=+19.190619540"
	Nov 29 09:04:26 embed-certs-976238 kubelet[1438]: I1129 09:04:26.503877    1438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56hjj\" (UniqueName: \"kubernetes.io/projected/dc39d248-15e7-409d-be52-e01d5a094726-kube-api-access-56hjj\") pod \"busybox\" (UID: \"dc39d248-15e7-409d-be52-e01d5a094726\") " pod="default/busybox"
	
	
	==> storage-provisioner [d54b901347f406aa3719940a4658b89f3c2a83e1de281b1e3ab1a3b70f37b029] <==
	I1129 09:04:23.873895       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 09:04:23.884943       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 09:04:23.884995       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 09:04:23.887628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:23.893040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:04:23.893234       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:04:23.893437       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"479cd1c1-98ff-4960-9b6e-9cc6ae8a115c", APIVersion:"v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-976238_32762e67-079d-438e-8eb0-cd5ee2b5ce97 became leader
	I1129 09:04:23.893642       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-976238_32762e67-079d-438e-8eb0-cd5ee2b5ce97!
	W1129 09:04:23.905007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:23.910647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:04:23.994398       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-976238_32762e67-079d-438e-8eb0-cd5ee2b5ce97!
	W1129 09:04:25.913968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:25.920244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:27.924806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:27.929323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:29.932947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:29.937949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:31.940844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:31.945034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:33.948541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:33.956403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:35.960512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:35.971423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-976238 -n embed-certs-976238
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-976238 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-976238
helpers_test.go:243: (dbg) docker inspect embed-certs-976238:

-- stdout --
	[
	    {
	        "Id": "b9aa399a064ac9613732c61e159fce0d41d08c5badd0810f72f8168d34862ef0",
	        "Created": "2025-11-29T09:03:50.315039357Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 522084,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:03:50.351516837Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/b9aa399a064ac9613732c61e159fce0d41d08c5badd0810f72f8168d34862ef0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b9aa399a064ac9613732c61e159fce0d41d08c5badd0810f72f8168d34862ef0/hostname",
	        "HostsPath": "/var/lib/docker/containers/b9aa399a064ac9613732c61e159fce0d41d08c5badd0810f72f8168d34862ef0/hosts",
	        "LogPath": "/var/lib/docker/containers/b9aa399a064ac9613732c61e159fce0d41d08c5badd0810f72f8168d34862ef0/b9aa399a064ac9613732c61e159fce0d41d08c5badd0810f72f8168d34862ef0-json.log",
	        "Name": "/embed-certs-976238",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-976238:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-976238",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b9aa399a064ac9613732c61e159fce0d41d08c5badd0810f72f8168d34862ef0",
	                "LowerDir": "/var/lib/docker/overlay2/0b5c677f90529901bd7d2cf30d320860e575018d3e243e4cebb38c68ed01524c-init/diff:/var/lib/docker/overlay2/eb180691bce18b8d981b2d61ed0962851c615364ed77c18ff66d559424569005/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0b5c677f90529901bd7d2cf30d320860e575018d3e243e4cebb38c68ed01524c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0b5c677f90529901bd7d2cf30d320860e575018d3e243e4cebb38c68ed01524c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0b5c677f90529901bd7d2cf30d320860e575018d3e243e4cebb38c68ed01524c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-976238",
	                "Source": "/var/lib/docker/volumes/embed-certs-976238/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-976238",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-976238",
	                "name.minikube.sigs.k8s.io": "embed-certs-976238",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a9099db3db9f2535f093dec06c8172c0f233f0eb38b3164e489f49d8a5d5278b",
	            "SandboxKey": "/var/run/docker/netns/a9099db3db9f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-976238": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0c5a65c00c4521ebfb70e7d67e87f09e44fd10abe11b8894838ca9901e14aee8",
	                    "EndpointID": "c028cc55c6b310756a74c1ed9fc3d20b0d2a1935ee982395deaf6596a627d924",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "16:c0:59:19:41:ad",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-976238",
	                        "b9aa399a064a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
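(Aside, not part of the recorded test output: individual fields from the inspect dump above, such as HostConfig.Ulimits or the published ports, can be pulled directly with docker inspect's Go-template flag instead of reading the full JSON. A minimal sketch, assuming the embed-certs-976238 container still exists:

	docker inspect embed-certs-976238 --format '{{json .HostConfig.Ulimits}}'
	docker inspect embed-certs-976238 --format '{{json .NetworkSettings.Ports}}'

The first prints the empty Ulimits list shown above; the second prints the 127.0.0.1 host-port bindings.)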
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-976238 -n embed-certs-976238
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-976238 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-976238 logs -n 25: (1.163238721s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-295154 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-295154       │ jenkins │ v1.37.0 │ 29 Nov 25 09:02 UTC │ 29 Nov 25 09:02 UTC │
	│ start   │ -p old-k8s-version-295154 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-295154       │ jenkins │ v1.37.0 │ 29 Nov 25 09:02 UTC │ 29 Nov 25 09:03 UTC │
	│ addons  │ enable dashboard -p no-preload-924441 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-924441            │ jenkins │ v1.37.0 │ 29 Nov 25 09:02 UTC │ 29 Nov 25 09:02 UTC │
	│ start   │ -p no-preload-924441 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-924441            │ jenkins │ v1.37.0 │ 29 Nov 25 09:02 UTC │ 29 Nov 25 09:03 UTC │
	│ image   │ old-k8s-version-295154 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-295154       │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ pause   │ -p old-k8s-version-295154 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-295154       │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ unpause │ -p old-k8s-version-295154 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-295154       │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ delete  │ -p old-k8s-version-295154                                                                                                                                                                                                                           │ old-k8s-version-295154       │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ delete  │ -p old-k8s-version-295154                                                                                                                                                                                                                           │ old-k8s-version-295154       │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ start   │ -p embed-certs-976238 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-976238           │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:04 UTC │
	│ image   │ no-preload-924441 image list --format=json                                                                                                                                                                                                          │ no-preload-924441            │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ pause   │ -p no-preload-924441 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-924441            │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ unpause │ -p no-preload-924441 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-924441            │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ delete  │ -p no-preload-924441                                                                                                                                                                                                                                │ no-preload-924441            │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ delete  │ -p no-preload-924441                                                                                                                                                                                                                                │ no-preload-924441            │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ delete  │ -p disable-driver-mounts-286131                                                                                                                                                                                                                     │ disable-driver-mounts-286131 │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ start   │ -p default-k8s-diff-port-357829 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-357829 │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:04 UTC │
	│ start   │ -p cert-expiration-368536 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-368536       │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:04 UTC │
	│ delete  │ -p cert-expiration-368536                                                                                                                                                                                                                           │ cert-expiration-368536       │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │ 29 Nov 25 09:04 UTC │
	│ start   │ -p newest-cni-106601 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-106601            │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │ 29 Nov 25 09:04 UTC │
	│ start   │ -p kubernetes-upgrade-806701 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-806701    │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │                     │
	│ start   │ -p kubernetes-upgrade-806701 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-806701    │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │ 29 Nov 25 09:04 UTC │
	│ delete  │ -p kubernetes-upgrade-806701                                                                                                                                                                                                                        │ kubernetes-upgrade-806701    │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │ 29 Nov 25 09:04 UTC │
	│ start   │ -p auto-770004 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-770004                  │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-106601 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-106601            │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:04:32
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:04:32.732948  535908 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:04:32.733062  535908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:04:32.733075  535908 out.go:374] Setting ErrFile to fd 2...
	I1129 09:04:32.733080  535908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:04:32.733355  535908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-255825/.minikube/bin
	I1129 09:04:32.733850  535908 out.go:368] Setting JSON to false
	I1129 09:04:32.735165  535908 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6417,"bootTime":1764400656,"procs":320,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:04:32.735220  535908 start.go:143] virtualization: kvm guest
	I1129 09:04:32.737261  535908 out.go:179] * [auto-770004] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:04:32.738414  535908 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:04:32.738455  535908 notify.go:221] Checking for updates...
	I1129 09:04:32.740552  535908 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:04:32.741831  535908 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-255825/kubeconfig
	I1129 09:04:32.742961  535908 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-255825/.minikube
	I1129 09:04:32.743998  535908 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:04:32.745074  535908 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:04:32.746601  535908 config.go:182] Loaded profile config "default-k8s-diff-port-357829": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:04:32.746687  535908 config.go:182] Loaded profile config "embed-certs-976238": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:04:32.746799  535908 config.go:182] Loaded profile config "newest-cni-106601": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:04:32.746886  535908 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:04:32.771343  535908 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 09:04:32.771454  535908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:04:32.834282  535908 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-29 09:04:32.824417383 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:04:32.834380  535908 docker.go:319] overlay module found
	I1129 09:04:32.836031  535908 out.go:179] * Using the docker driver based on user configuration
	I1129 09:04:32.837002  535908 start.go:309] selected driver: docker
	I1129 09:04:32.837015  535908 start.go:927] validating driver "docker" against <nil>
	I1129 09:04:32.837039  535908 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:04:32.837562  535908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:04:32.895888  535908 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-29 09:04:32.885317909 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:04:32.896061  535908 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 09:04:32.896270  535908 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:04:32.898062  535908 out.go:179] * Using Docker driver with root privileges
	I1129 09:04:32.899330  535908 cni.go:84] Creating CNI manager for ""
	I1129 09:04:32.899394  535908 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:04:32.899405  535908 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 09:04:32.899498  535908 start.go:353] cluster config:
	{Name:auto-770004 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-770004 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:con
tainerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:04:32.900658  535908 out.go:179] * Starting "auto-770004" primary control-plane node in "auto-770004" cluster
	I1129 09:04:32.901675  535908 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1129 09:04:32.902635  535908 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:04:32.903607  535908 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:04:32.903648  535908 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1129 09:04:32.903663  535908 cache.go:65] Caching tarball of preloaded images
	I1129 09:04:32.903700  535908 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:04:32.903805  535908 preload.go:238] Found /home/jenkins/minikube-integration/22000-255825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1129 09:04:32.903818  535908 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1129 09:04:32.903946  535908 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/config.json ...
	I1129 09:04:32.903977  535908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/config.json: {Name:mk6b7a26c494386e8ab18d3d35aebe1608fed877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:04:32.926278  535908 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:04:32.926297  535908 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:04:32.926312  535908 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:04:32.926340  535908 start.go:360] acquireMachinesLock for auto-770004: {Name:mk40429e53b2b4db07988f43af305c63a1e72053 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:04:32.926432  535908 start.go:364] duration metric: took 74.018µs to acquireMachinesLock for "auto-770004"
	I1129 09:04:32.926453  535908 start.go:93] Provisioning new machine with config: &{Name:auto-770004 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-770004 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1129 09:04:32.926519  535908 start.go:125] createHost starting for "" (driver="docker")
	W1129 09:04:29.542575  523936 node_ready.go:57] node "default-k8s-diff-port-357829" has "Ready":"False" status (will retry)
	W1129 09:04:32.041860  523936 node_ready.go:57] node "default-k8s-diff-port-357829" has "Ready":"False" status (will retry)
	I1129 09:04:32.057221  528649 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1129 09:04:32.061807  528649 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1129 09:04:32.061829  528649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1129 09:04:32.075711  528649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1129 09:04:32.307201  528649 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 09:04:32.307358  528649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:04:32.307382  528649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-106601 minikube.k8s.io/updated_at=2025_11_29T09_04_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=newest-cni-106601 minikube.k8s.io/primary=true
	I1129 09:04:32.317612  528649 ops.go:34] apiserver oom_adj: -16
	I1129 09:04:32.398017  528649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:04:32.898664  528649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:04:33.398982  528649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:04:33.898401  528649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:04:34.398947  528649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:04:34.898717  528649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:04:35.398941  528649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:04:35.898157  528649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:04:36.398962  528649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1129 09:04:34.042432  523936 node_ready.go:57] node "default-k8s-diff-port-357829" has "Ready":"False" status (will retry)
	I1129 09:04:34.541765  523936 node_ready.go:49] node "default-k8s-diff-port-357829" is "Ready"
	I1129 09:04:34.541804  523936 node_ready.go:38] duration metric: took 11.503160551s for node "default-k8s-diff-port-357829" to be "Ready" ...
	I1129 09:04:34.541825  523936 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:04:34.541890  523936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:04:34.557820  523936 api_server.go:72] duration metric: took 11.942091831s to wait for apiserver process to appear ...
	I1129 09:04:34.557849  523936 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:04:34.557872  523936 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1129 09:04:34.562922  523936 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1129 09:04:34.564144  523936 api_server.go:141] control plane version: v1.34.1
	I1129 09:04:34.564178  523936 api_server.go:131] duration metric: took 6.319403ms to wait for apiserver health ...
	I1129 09:04:34.564191  523936 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:04:34.567841  523936 system_pods.go:59] 8 kube-system pods found
	I1129 09:04:34.567879  523936 system_pods.go:61] "coredns-66bc5c9577-d7vmg" [4ebe88f4-4c20-4523-8642-f54615c1f605] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:04:34.567889  523936 system_pods.go:61] "etcd-default-k8s-diff-port-357829" [6c4c6f16-3f64-4497-b97f-2a671753712e] Running
	I1129 09:04:34.567901  523936 system_pods.go:61] "kindnet-g5whk" [5563c069-5b20-4835-941c-48eb3b04c051] Running
	I1129 09:04:34.567907  523936 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-357829" [63ae458a-4c90-430f-abf2-89d10486fa11] Running
	I1129 09:04:34.567916  523936 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-357829" [25e4471e-c878-4e47-b809-119821095e31] Running
	I1129 09:04:34.567924  523936 system_pods.go:61] "kube-proxy-v9bbz" [6a515c70-840f-41c2-b1e4-6de13b23e5f3] Running
	I1129 09:04:34.567935  523936 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-357829" [e32668d0-29c4-4ce1-aa19-dc8722f0eae3] Running
	I1129 09:04:34.567944  523936 system_pods.go:61] "storage-provisioner" [d9aa47c6-1005-4a91-a986-819f21c0cfda] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:04:34.567952  523936 system_pods.go:74] duration metric: took 3.753244ms to wait for pod list to return data ...
	I1129 09:04:34.567964  523936 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:04:34.570613  523936 default_sa.go:45] found service account: "default"
	I1129 09:04:34.570637  523936 default_sa.go:55] duration metric: took 2.666263ms for default service account to be created ...
	I1129 09:04:34.570646  523936 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:04:34.573276  523936 system_pods.go:86] 8 kube-system pods found
	I1129 09:04:34.573303  523936 system_pods.go:89] "coredns-66bc5c9577-d7vmg" [4ebe88f4-4c20-4523-8642-f54615c1f605] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:04:34.573308  523936 system_pods.go:89] "etcd-default-k8s-diff-port-357829" [6c4c6f16-3f64-4497-b97f-2a671753712e] Running
	I1129 09:04:34.573315  523936 system_pods.go:89] "kindnet-g5whk" [5563c069-5b20-4835-941c-48eb3b04c051] Running
	I1129 09:04:34.573321  523936 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-357829" [63ae458a-4c90-430f-abf2-89d10486fa11] Running
	I1129 09:04:34.573331  523936 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-357829" [25e4471e-c878-4e47-b809-119821095e31] Running
	I1129 09:04:34.573344  523936 system_pods.go:89] "kube-proxy-v9bbz" [6a515c70-840f-41c2-b1e4-6de13b23e5f3] Running
	I1129 09:04:34.573353  523936 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-357829" [e32668d0-29c4-4ce1-aa19-dc8722f0eae3] Running
	I1129 09:04:34.573359  523936 system_pods.go:89] "storage-provisioner" [d9aa47c6-1005-4a91-a986-819f21c0cfda] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:04:34.573385  523936 retry.go:31] will retry after 240.595886ms: missing components: kube-dns
	I1129 09:04:34.819020  523936 system_pods.go:86] 8 kube-system pods found
	I1129 09:04:34.819061  523936 system_pods.go:89] "coredns-66bc5c9577-d7vmg" [4ebe88f4-4c20-4523-8642-f54615c1f605] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:04:34.819070  523936 system_pods.go:89] "etcd-default-k8s-diff-port-357829" [6c4c6f16-3f64-4497-b97f-2a671753712e] Running
	I1129 09:04:34.819078  523936 system_pods.go:89] "kindnet-g5whk" [5563c069-5b20-4835-941c-48eb3b04c051] Running
	I1129 09:04:34.819083  523936 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-357829" [63ae458a-4c90-430f-abf2-89d10486fa11] Running
	I1129 09:04:34.819088  523936 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-357829" [25e4471e-c878-4e47-b809-119821095e31] Running
	I1129 09:04:34.819144  523936 system_pods.go:89] "kube-proxy-v9bbz" [6a515c70-840f-41c2-b1e4-6de13b23e5f3] Running
	I1129 09:04:34.819160  523936 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-357829" [e32668d0-29c4-4ce1-aa19-dc8722f0eae3] Running
	I1129 09:04:34.819171  523936 system_pods.go:89] "storage-provisioner" [d9aa47c6-1005-4a91-a986-819f21c0cfda] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:04:34.819198  523936 retry.go:31] will retry after 269.38889ms: missing components: kube-dns
	I1129 09:04:35.093485  523936 system_pods.go:86] 8 kube-system pods found
	I1129 09:04:35.093521  523936 system_pods.go:89] "coredns-66bc5c9577-d7vmg" [4ebe88f4-4c20-4523-8642-f54615c1f605] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:04:35.093529  523936 system_pods.go:89] "etcd-default-k8s-diff-port-357829" [6c4c6f16-3f64-4497-b97f-2a671753712e] Running
	I1129 09:04:35.093536  523936 system_pods.go:89] "kindnet-g5whk" [5563c069-5b20-4835-941c-48eb3b04c051] Running
	I1129 09:04:35.093544  523936 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-357829" [63ae458a-4c90-430f-abf2-89d10486fa11] Running
	I1129 09:04:35.093550  523936 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-357829" [25e4471e-c878-4e47-b809-119821095e31] Running
	I1129 09:04:35.093561  523936 system_pods.go:89] "kube-proxy-v9bbz" [6a515c70-840f-41c2-b1e4-6de13b23e5f3] Running
	I1129 09:04:35.093566  523936 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-357829" [e32668d0-29c4-4ce1-aa19-dc8722f0eae3] Running
	I1129 09:04:35.093574  523936 system_pods.go:89] "storage-provisioner" [d9aa47c6-1005-4a91-a986-819f21c0cfda] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:04:35.093596  523936 retry.go:31] will retry after 339.116297ms: missing components: kube-dns
	I1129 09:04:35.436348  523936 system_pods.go:86] 8 kube-system pods found
	I1129 09:04:35.436386  523936 system_pods.go:89] "coredns-66bc5c9577-d7vmg" [4ebe88f4-4c20-4523-8642-f54615c1f605] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:04:35.436402  523936 system_pods.go:89] "etcd-default-k8s-diff-port-357829" [6c4c6f16-3f64-4497-b97f-2a671753712e] Running
	I1129 09:04:35.436412  523936 system_pods.go:89] "kindnet-g5whk" [5563c069-5b20-4835-941c-48eb3b04c051] Running
	I1129 09:04:35.436418  523936 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-357829" [63ae458a-4c90-430f-abf2-89d10486fa11] Running
	I1129 09:04:35.436424  523936 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-357829" [25e4471e-c878-4e47-b809-119821095e31] Running
	I1129 09:04:35.436440  523936 system_pods.go:89] "kube-proxy-v9bbz" [6a515c70-840f-41c2-b1e4-6de13b23e5f3] Running
	I1129 09:04:35.436450  523936 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-357829" [e32668d0-29c4-4ce1-aa19-dc8722f0eae3] Running
	I1129 09:04:35.436458  523936 system_pods.go:89] "storage-provisioner" [d9aa47c6-1005-4a91-a986-819f21c0cfda] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 09:04:35.436483  523936 retry.go:31] will retry after 502.551226ms: missing components: kube-dns
	I1129 09:04:35.944352  523936 system_pods.go:86] 8 kube-system pods found
	I1129 09:04:35.944384  523936 system_pods.go:89] "coredns-66bc5c9577-d7vmg" [4ebe88f4-4c20-4523-8642-f54615c1f605] Running
	I1129 09:04:35.944393  523936 system_pods.go:89] "etcd-default-k8s-diff-port-357829" [6c4c6f16-3f64-4497-b97f-2a671753712e] Running
	I1129 09:04:35.944399  523936 system_pods.go:89] "kindnet-g5whk" [5563c069-5b20-4835-941c-48eb3b04c051] Running
	I1129 09:04:35.944405  523936 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-357829" [63ae458a-4c90-430f-abf2-89d10486fa11] Running
	I1129 09:04:35.944412  523936 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-357829" [25e4471e-c878-4e47-b809-119821095e31] Running
	I1129 09:04:35.944416  523936 system_pods.go:89] "kube-proxy-v9bbz" [6a515c70-840f-41c2-b1e4-6de13b23e5f3] Running
	I1129 09:04:35.944421  523936 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-357829" [e32668d0-29c4-4ce1-aa19-dc8722f0eae3] Running
	I1129 09:04:35.944428  523936 system_pods.go:89] "storage-provisioner" [d9aa47c6-1005-4a91-a986-819f21c0cfda] Running
	I1129 09:04:35.944438  523936 system_pods.go:126] duration metric: took 1.373786209s to wait for k8s-apps to be running ...
	I1129 09:04:35.944451  523936 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:04:35.944504  523936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:04:35.960002  523936 system_svc.go:56] duration metric: took 15.53873ms WaitForService to wait for kubelet
	I1129 09:04:35.960037  523936 kubeadm.go:587] duration metric: took 13.344314213s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:04:35.960066  523936 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:04:35.963318  523936 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1129 09:04:35.963346  523936 node_conditions.go:123] node cpu capacity is 8
	I1129 09:04:35.963368  523936 node_conditions.go:105] duration metric: took 3.295125ms to run NodePressure ...
	I1129 09:04:35.963386  523936 start.go:242] waiting for startup goroutines ...
	I1129 09:04:35.963399  523936 start.go:247] waiting for cluster config update ...
	I1129 09:04:35.963413  523936 start.go:256] writing updated cluster config ...
	I1129 09:04:35.965023  523936 ssh_runner.go:195] Run: rm -f paused
	I1129 09:04:35.970088  523936 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:04:35.974452  523936 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-d7vmg" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:04:35.978573  523936 pod_ready.go:94] pod "coredns-66bc5c9577-d7vmg" is "Ready"
	I1129 09:04:35.978596  523936 pod_ready.go:86] duration metric: took 4.117951ms for pod "coredns-66bc5c9577-d7vmg" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:04:35.980588  523936 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-357829" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:04:35.984121  523936 pod_ready.go:94] pod "etcd-default-k8s-diff-port-357829" is "Ready"
	I1129 09:04:35.984141  523936 pod_ready.go:86] duration metric: took 3.533985ms for pod "etcd-default-k8s-diff-port-357829" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:04:35.985915  523936 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-357829" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:04:35.989545  523936 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-357829" is "Ready"
	I1129 09:04:35.989564  523936 pod_ready.go:86] duration metric: took 3.629987ms for pod "kube-apiserver-default-k8s-diff-port-357829" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:04:35.991561  523936 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-357829" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:04:36.374930  523936 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-357829" is "Ready"
	I1129 09:04:36.374962  523936 pod_ready.go:86] duration metric: took 383.383298ms for pod "kube-controller-manager-default-k8s-diff-port-357829" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:04:36.759664  523936 pod_ready.go:83] waiting for pod "kube-proxy-v9bbz" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:04:37.105995  523936 pod_ready.go:94] pod "kube-proxy-v9bbz" is "Ready"
	I1129 09:04:37.106029  523936 pod_ready.go:86] duration metric: took 346.332162ms for pod "kube-proxy-v9bbz" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:04:37.234321  523936 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-357829" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:04:37.576218  523936 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-357829" is "Ready"
	I1129 09:04:37.576257  523936 pod_ready.go:86] duration metric: took 341.904608ms for pod "kube-scheduler-default-k8s-diff-port-357829" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:04:37.576272  523936 pod_ready.go:40] duration metric: took 1.606153731s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:04:37.631023  523936 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:04:36.898299  528649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:04:37.398447  528649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 09:04:37.589785  528649 kubeadm.go:1114] duration metric: took 5.282496677s to wait for elevateKubeSystemPrivileges
	I1129 09:04:37.589847  528649 kubeadm.go:403] duration metric: took 17.940955667s to StartCluster
	I1129 09:04:37.589873  528649 settings.go:142] acquiring lock: {Name:mk6dbed29e5e99d89b1cbbd9e561d8f8791ae9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:04:37.590126  528649 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-255825/kubeconfig
	I1129 09:04:37.592406  528649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/kubeconfig: {Name:mk7d91966efd00ccef892cf02f31ec14469accbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:04:37.704321  528649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 09:04:37.704346  528649 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1129 09:04:37.704474  528649 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:04:37.710225  528649 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-106601"
	I1129 09:04:37.704598  528649 config.go:182] Loaded profile config "newest-cni-106601": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:04:37.710254  528649 addons.go:70] Setting default-storageclass=true in profile "newest-cni-106601"
	I1129 09:04:37.710280  528649 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-106601"
	I1129 09:04:37.710297  528649 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-106601"
	I1129 09:04:37.710349  528649 host.go:66] Checking if "newest-cni-106601" exists ...
	I1129 09:04:37.710619  528649 cli_runner.go:164] Run: docker container inspect newest-cni-106601 --format={{.State.Status}}
	I1129 09:04:37.710896  528649 cli_runner.go:164] Run: docker container inspect newest-cni-106601 --format={{.State.Status}}
	I1129 09:04:32.928708  535908 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1129 09:04:32.928925  535908 start.go:159] libmachine.API.Create for "auto-770004" (driver="docker")
	I1129 09:04:32.928958  535908 client.go:173] LocalClient.Create starting
	I1129 09:04:32.929039  535908 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem
	I1129 09:04:32.929071  535908 main.go:143] libmachine: Decoding PEM data...
	I1129 09:04:32.929090  535908 main.go:143] libmachine: Parsing certificate...
	I1129 09:04:32.929138  535908 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem
	I1129 09:04:32.929156  535908 main.go:143] libmachine: Decoding PEM data...
	I1129 09:04:32.929167  535908 main.go:143] libmachine: Parsing certificate...
	I1129 09:04:32.929509  535908 cli_runner.go:164] Run: docker network inspect auto-770004 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1129 09:04:32.947779  535908 cli_runner.go:211] docker network inspect auto-770004 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1129 09:04:32.947847  535908 network_create.go:284] running [docker network inspect auto-770004] to gather additional debugging logs...
	I1129 09:04:32.947872  535908 cli_runner.go:164] Run: docker network inspect auto-770004
	W1129 09:04:32.966851  535908 cli_runner.go:211] docker network inspect auto-770004 returned with exit code 1
	I1129 09:04:32.966885  535908 network_create.go:287] error running [docker network inspect auto-770004]: docker network inspect auto-770004: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-770004 not found
	I1129 09:04:32.966901  535908 network_create.go:289] output of [docker network inspect auto-770004]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-770004 not found
	
	** /stderr **
	I1129 09:04:32.967172  535908 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:04:32.984590  535908 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f69c672bf913 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:26:40:f4:ed:4f:ab} reservation:<nil>}
	I1129 09:04:32.985291  535908 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-96d20aff5877 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:c2:01:e2:a3:b8:33} reservation:<nil>}
	I1129 09:04:32.985983  535908 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f7906c56f869 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:29:75:e3:e0:7f} reservation:<nil>}
	I1129 09:04:32.986437  535908 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0c5a65c00c45 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:62:ff:d4:90:39:a6} reservation:<nil>}
	I1129 09:04:32.987264  535908 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ddfa00}
	I1129 09:04:32.987291  535908 network_create.go:124] attempt to create docker network auto-770004 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1129 09:04:32.987331  535908 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-770004 auto-770004
	I1129 09:04:33.035373  535908 network_create.go:108] docker network auto-770004 192.168.85.0/24 created
	I1129 09:04:33.035409  535908 kic.go:121] calculated static IP "192.168.85.2" for the "auto-770004" container
	I1129 09:04:33.035480  535908 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1129 09:04:33.054305  535908 cli_runner.go:164] Run: docker volume create auto-770004 --label name.minikube.sigs.k8s.io=auto-770004 --label created_by.minikube.sigs.k8s.io=true
	I1129 09:04:33.072263  535908 oci.go:103] Successfully created a docker volume auto-770004
	I1129 09:04:33.072342  535908 cli_runner.go:164] Run: docker run --rm --name auto-770004-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-770004 --entrypoint /usr/bin/test -v auto-770004:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1129 09:04:33.470898  535908 oci.go:107] Successfully prepared a docker volume auto-770004
	I1129 09:04:33.470993  535908 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:04:33.471034  535908 kic.go:194] Starting extracting preloaded images to volume ...
	I1129 09:04:33.471111  535908 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-255825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-770004:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
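
[editor's note] The network.go lines immediately above show how minikube picked 192.168.85.0/24 for the auto-770004 network: it walked candidate private /24s (49, 58, 67, 76, ...) and skipped each one already owned by an existing bridge. A minimal standalone sketch of that selection loop, using only the Go standard library; the fixed step of 9 in the third octet and the takenSubnets map are inferred from this log, not taken from minikube's source:

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... and returns the
// first /24 not present in taken. The taken set stands in for the bridge
// interface scan minikube actually performs (an assumption for this sketch).
func firstFreeSubnet(taken map[string]bool) (*net.IPNet, error) {
	for third := 49; third <= 255; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[cidr] {
			continue
		}
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		return subnet, nil
	}
	return nil, fmt.Errorf("no free 192.168.0.0/16 subnet found")
}

func main() {
	// Subnets already used by the other profiles in the log above.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	subnet, err := firstFreeSubnet(taken)
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", subnet) // 192.168.85.0/24
}

Once a free subnet is found, the log shows it being handed straight to "docker network create --driver=bridge --subnet=... --gateway=...", with the node later pinned to the .2 address of that subnet.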
	I1129 09:04:37.737223  523936 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-357829" cluster and "default" namespace by default
	I1129 09:04:37.733891  528649 addons.go:239] Setting addon default-storageclass=true in "newest-cni-106601"
	I1129 09:04:37.733946  528649 host.go:66] Checking if "newest-cni-106601" exists ...
	I1129 09:04:37.734438  528649 cli_runner.go:164] Run: docker container inspect newest-cni-106601 --format={{.State.Status}}
	I1129 09:04:37.737930  528649 out.go:179] * Verifying Kubernetes components...
	I1129 09:04:37.754595  528649 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:04:37.801267  528649 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:04:37.801365  528649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-106601
	I1129 09:04:37.802890  528649 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:04:37.813403  528649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 09:04:37.824877  528649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/newest-cni-106601/id_rsa Username:docker}
	I1129 09:04:37.922303  528649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:04:37.922339  528649 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:04:37.922358  528649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:04:37.922423  528649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-106601
	I1129 09:04:37.959490  528649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/newest-cni-106601/id_rsa Username:docker}
	I1129 09:04:37.964412  528649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:04:38.089278  528649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:04:38.174785  528649 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1129 09:04:38.174859  528649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:04:38.482383  528649 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1129 09:04:38.482391  528649 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:04:38.482505  528649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:04:38.483695  528649 addons.go:530] duration metric: took 779.217611ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1129 09:04:38.502053  528649 api_server.go:72] duration metric: took 797.651684ms to wait for apiserver process to appear ...
	I1129 09:04:38.502084  528649 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:04:38.502109  528649 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1129 09:04:38.508728  528649 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1129 09:04:38.510072  528649 api_server.go:141] control plane version: v1.34.1
	I1129 09:04:38.510099  528649 api_server.go:131] duration metric: took 8.007502ms to wait for apiserver health ...
	I1129 09:04:38.510109  528649 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:04:38.514103  528649 system_pods.go:59] 8 kube-system pods found
	I1129 09:04:38.514153  528649 system_pods.go:61] "coredns-66bc5c9577-4mxwq" [7b54792b-e66c-432c-b4a7-7f15f57555f5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1129 09:04:38.514166  528649 system_pods.go:61] "etcd-newest-cni-106601" [e2a22119-5b3b-4d11-bffe-da1dbb76eab7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:04:38.514191  528649 system_pods.go:61] "kindnet-p5rq4" [9f2c3030-260d-4d3e-abb6-78cf354db315] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1129 09:04:38.514202  528649 system_pods.go:61] "kube-apiserver-newest-cni-106601" [0ea8fc03-3005-45f6-bc3d-066fd1eed103] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:04:38.514219  528649 system_pods.go:61] "kube-controller-manager-newest-cni-106601" [323a81f8-112b-42d0-a942-b2494fad8885] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:04:38.514228  528649 system_pods.go:61] "kube-proxy-bl4qs" [c9f6f412-35f5-4b59-b2c9-3f04168c0465] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 09:04:38.514235  528649 system_pods.go:61] "kube-scheduler-newest-cni-106601" [f1715e2f-2e77-4fd7-8897-766369ff102e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:04:38.514245  528649 system_pods.go:61] "storage-provisioner" [19712229-e892-443a-8cbf-4f9ec88adf63] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1129 09:04:38.514253  528649 system_pods.go:74] duration metric: took 4.137954ms to wait for pod list to return data ...
	I1129 09:04:38.514272  528649 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:04:38.517261  528649 default_sa.go:45] found service account: "default"
	I1129 09:04:38.517286  528649 default_sa.go:55] duration metric: took 3.006756ms for default service account to be created ...
	I1129 09:04:38.517300  528649 kubeadm.go:587] duration metric: took 812.906642ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1129 09:04:38.517327  528649 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:04:38.520993  528649 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1129 09:04:38.521099  528649 node_conditions.go:123] node cpu capacity is 8
	I1129 09:04:38.521135  528649 node_conditions.go:105] duration metric: took 3.801697ms to run NodePressure ...
	I1129 09:04:38.521181  528649 start.go:242] waiting for startup goroutines ...
	I1129 09:04:38.680523  528649 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-106601" context rescaled to 1 replicas
	I1129 09:04:38.680571  528649 start.go:247] waiting for cluster config update ...
	I1129 09:04:38.680587  528649 start.go:256] writing updated cluster config ...
	I1129 09:04:38.680912  528649 ssh_runner.go:195] Run: rm -f paused
	I1129 09:04:38.750983  528649 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:04:38.752801  528649 out.go:179] * Done! kubectl is now configured to use "newest-cni-106601" cluster and "default" namespace by default
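
[editor's note] The api_server.go lines above first wait for a kube-apiserver process and then poll https://192.168.94.2:8443/healthz until it answers 200 "ok". That wait can be reproduced against a running profile with a small poll loop. A minimal sketch, assuming the endpoint from the log and skipping certificate verification for brevity (minikube itself trusts the profile CA and presents client certificates; unauthenticated /healthz access is normally allowed by the default system:public-info-viewer binding):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers 200
// or the deadline expires, mirroring the wait logged by api_server.go above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// TLS verification is skipped only in this sketch; the real check uses
		// the cluster CA and client certs from the profile directory.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.94.2:8443/healthz", time.Minute); err != nil {
		panic(err)
	}
}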
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	dd55171d68c77       56cc512116c8f       10 seconds ago      Running             busybox                   0                   6f6e3b02a0ee2       busybox                                      default
	3d08a463e6ad8       52546a367cc9e       16 seconds ago      Running             coredns                   0                   0cb648ccf514a       coredns-66bc5c9577-sz2td                     kube-system
	d54b901347f40       6e38f40d628db       16 seconds ago      Running             storage-provisioner       0                   d674c2e459d17       storage-provisioner                          kube-system
	af29e75e8081c       409467f978b4a       27 seconds ago      Running             kindnet-cni               0                   ac240c4cc6401       kindnet-k5955                                kube-system
	a160c497eeea6       fc25172553d79       27 seconds ago      Running             kube-proxy                0                   fd597c18e12f8       kube-proxy-prv6p                             kube-system
	889bc1e303b19       7dd6aaa1717ab       39 seconds ago      Running             kube-scheduler            0                   3016bab129e62       kube-scheduler-embed-certs-976238            kube-system
	957926049f5ef       c80c8dbafe7dd       39 seconds ago      Running             kube-controller-manager   0                   caadaa0aa4ea9       kube-controller-manager-embed-certs-976238   kube-system
	576a1e0a480b2       c3994bc696102       39 seconds ago      Running             kube-apiserver            0                   25b754b728260       kube-apiserver-embed-certs-976238            kube-system
	0ffa2844b911a       5f1f5298c888d       39 seconds ago      Running             etcd                      0                   ae9e84e8fbfc3       etcd-embed-certs-976238                      kube-system
	
	
	==> containerd <==
	Nov 29 09:04:23 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:23.791299648Z" level=info msg="Container 3d08a463e6ad880d8ddd772750b65fab105a54e78bc396c09ab6e425aa191ce7: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:04:23 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:23.794505636Z" level=info msg="CreateContainer within sandbox \"d674c2e459d17bd8b042d0a10da3e95b633fec12d1a59419ab7e4c4835297aff\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"d54b901347f406aa3719940a4658b89f3c2a83e1de281b1e3ab1a3b70f37b029\""
	Nov 29 09:04:23 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:23.795068467Z" level=info msg="StartContainer for \"d54b901347f406aa3719940a4658b89f3c2a83e1de281b1e3ab1a3b70f37b029\""
	Nov 29 09:04:23 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:23.796333066Z" level=info msg="connecting to shim d54b901347f406aa3719940a4658b89f3c2a83e1de281b1e3ab1a3b70f37b029" address="unix:///run/containerd/s/ae8a13c5e29679193a47c33a520f0e7db8e30e1b1a014922e88d1f895455e018" protocol=ttrpc version=3
	Nov 29 09:04:23 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:23.799983799Z" level=info msg="CreateContainer within sandbox \"0cb648ccf514a04e7c606e62d3205b84ae3686418ec59196dd7117ceb2628295\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3d08a463e6ad880d8ddd772750b65fab105a54e78bc396c09ab6e425aa191ce7\""
	Nov 29 09:04:23 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:23.800880296Z" level=info msg="StartContainer for \"3d08a463e6ad880d8ddd772750b65fab105a54e78bc396c09ab6e425aa191ce7\""
	Nov 29 09:04:23 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:23.802981936Z" level=info msg="connecting to shim 3d08a463e6ad880d8ddd772750b65fab105a54e78bc396c09ab6e425aa191ce7" address="unix:///run/containerd/s/a30f3f9facc588f9011512b86bfd76b3f5f86a6834f6dc341827250e97b9543d" protocol=ttrpc version=3
	Nov 29 09:04:23 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:23.855805854Z" level=info msg="StartContainer for \"d54b901347f406aa3719940a4658b89f3c2a83e1de281b1e3ab1a3b70f37b029\" returns successfully"
	Nov 29 09:04:23 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:23.870348074Z" level=info msg="StartContainer for \"3d08a463e6ad880d8ddd772750b65fab105a54e78bc396c09ab6e425aa191ce7\" returns successfully"
	Nov 29 09:04:26 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:26.751222788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:dc39d248-15e7-409d-be52-e01d5a094726,Namespace:default,Attempt:0,}"
	Nov 29 09:04:26 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:26.797976137Z" level=info msg="connecting to shim 6f6e3b02a0ee26da7854b922b28b5028dcc8c006edfb7e2d43007ce3a402a1b9" address="unix:///run/containerd/s/d246f54ee907b99e916171a4462a5c8d6bdf4a662d767261b183ed2853248f8f" namespace=k8s.io protocol=ttrpc version=3
	Nov 29 09:04:26 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:26.883803045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:dc39d248-15e7-409d-be52-e01d5a094726,Namespace:default,Attempt:0,} returns sandbox id \"6f6e3b02a0ee26da7854b922b28b5028dcc8c006edfb7e2d43007ce3a402a1b9\""
	Nov 29 09:04:26 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:26.886676269Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 29 09:04:29 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:29.142242797Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:04:29 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:29.142943604Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396646"
	Nov 29 09:04:29 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:29.143977541Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:04:29 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:29.145860824Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:04:29 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:29.146249978Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.25944198s"
	Nov 29 09:04:29 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:29.146294301Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 29 09:04:29 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:29.150361265Z" level=info msg="CreateContainer within sandbox \"6f6e3b02a0ee26da7854b922b28b5028dcc8c006edfb7e2d43007ce3a402a1b9\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 29 09:04:29 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:29.156677687Z" level=info msg="Container dd55171d68c77a1b046242badc74dcba50315eb6badf827a4a8510b73cf77d5d: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:04:29 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:29.162210510Z" level=info msg="CreateContainer within sandbox \"6f6e3b02a0ee26da7854b922b28b5028dcc8c006edfb7e2d43007ce3a402a1b9\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"dd55171d68c77a1b046242badc74dcba50315eb6badf827a4a8510b73cf77d5d\""
	Nov 29 09:04:29 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:29.162809777Z" level=info msg="StartContainer for \"dd55171d68c77a1b046242badc74dcba50315eb6badf827a4a8510b73cf77d5d\""
	Nov 29 09:04:29 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:29.163745993Z" level=info msg="connecting to shim dd55171d68c77a1b046242badc74dcba50315eb6badf827a4a8510b73cf77d5d" address="unix:///run/containerd/s/d246f54ee907b99e916171a4462a5c8d6bdf4a662d767261b183ed2853248f8f" protocol=ttrpc version=3
	Nov 29 09:04:29 embed-certs-976238 containerd[658]: time="2025-11-29T09:04:29.217141529Z" level=info msg="StartContainer for \"dd55171d68c77a1b046242badc74dcba50315eb6badf827a4a8510b73cf77d5d\" returns successfully"
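
[editor's note] The containerd entries above trace the CRI path for the busybox pod: RunPodSandbox, PullImage, CreateContainer inside the sandbox, then StartContainer. Pulling the same image directly through the containerd Go client looks roughly like the sketch below; the default socket path, the "k8s.io" namespace used by the CRI plugin, and the containerd 1.x module path are assumptions, not taken from the test harness:

package main

import (
	"context"
	"fmt"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the same containerd instance the kubelet uses.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// The CRI plugin keeps Kubernetes images in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the busybox image referenced in the log above.
	image, err := client.Pull(ctx, "gcr.io/k8s-minikube/busybox:1.28.4-glibc", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled", image.Name())
}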
	
	
	==> coredns [3d08a463e6ad880d8ddd772750b65fab105a54e78bc396c09ab6e425aa191ce7] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43947 - 5403 "HINFO IN 6459388017159108501.114435138945518211. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.034969011s
	
	
	==> describe nodes <==
	Name:               embed-certs-976238
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-976238
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=embed-certs-976238
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_04_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:04:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-976238
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:04:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:04:35 +0000   Sat, 29 Nov 2025 09:04:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:04:35 +0000   Sat, 29 Nov 2025 09:04:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:04:35 +0000   Sat, 29 Nov 2025 09:04:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:04:35 +0000   Sat, 29 Nov 2025 09:04:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-976238
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                d886b6c4-ae69-4d2d-bf27-37d976e31f50
	  Boot ID:                    b81dce2f-73d5-4349-b473-aa1210058cb8
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 coredns-66bc5c9577-sz2td                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-embed-certs-976238                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-k5955                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-embed-certs-976238             250m (3%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-embed-certs-976238    200m (2%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-prv6p                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-embed-certs-976238             100m (1%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s (x8 over 40s)  kubelet          Node embed-certs-976238 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x8 over 40s)  kubelet          Node embed-certs-976238 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x7 over 40s)  kubelet          Node embed-certs-976238 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  40s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  35s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  35s                kubelet          Node embed-certs-976238 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s                kubelet          Node embed-certs-976238 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s                kubelet          Node embed-certs-976238 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s                node-controller  Node embed-certs-976238 event: Registered Node embed-certs-976238 in Controller
	  Normal  NodeReady                17s                kubelet          Node embed-certs-976238 status is now: NodeReady
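
[editor's note] The node description above is what the post-mortem captures via kubectl; the same readiness and capacity fields drive the NodePressure verification logged by node_conditions.go earlier (cpu capacity 8, ephemeral-storage 304681132Ki). A minimal client-go sketch that reads the Ready condition and capacity for this node; the kubeconfig path is an assumption matching the jenkins layout seen elsewhere in the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; substitute the profile's real kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22000-255825/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	node, err := clientset.CoreV1().Nodes().Get(context.Background(), "embed-certs-976238", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Same fields that the "describe nodes" output above renders.
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s (%s)\n", cond.Status, cond.Reason)
		}
	}
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	fmt.Printf("cpu=%s ephemeral-storage=%s\n", cpu.String(), storage.String())
}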
	
	
	==> dmesg <==
	[Nov29 07:17] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001881] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084003] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.378167] i8042: Warning: Keylock active
	[  +0.012106] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.460417] block sda: the capability attribute has been deprecated.
	[  +0.079627] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.021012] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.285522] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [0ffa2844b911a1c5b0c8ac6dbe8eee086c24ffbc4c864cec6d90e875a1a16e8a] <==
	{"level":"info","ts":"2025-11-29T09:04:11.472033Z","caller":"traceutil/trace.go:172","msg":"trace[848082240] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"202.912685ms","start":"2025-11-29T09:04:11.269106Z","end":"2025-11-29T09:04:11.472019Z","steps":["trace[848082240] 'process raft request'  (duration: 202.56652ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.471944Z","caller":"traceutil/trace.go:172","msg":"trace[2087248779] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"205.879787ms","start":"2025-11-29T09:04:11.266053Z","end":"2025-11-29T09:04:11.471933Z","steps":["trace[2087248779] 'process raft request'  (duration: 205.05413ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.472285Z","caller":"traceutil/trace.go:172","msg":"trace[328840537] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"194.452275ms","start":"2025-11-29T09:04:11.277821Z","end":"2025-11-29T09:04:11.472273Z","steps":["trace[328840537] 'process raft request'  (duration: 193.947835ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.472321Z","caller":"traceutil/trace.go:172","msg":"trace[263126455] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"202.362956ms","start":"2025-11-29T09:04:11.269948Z","end":"2025-11-29T09:04:11.472311Z","steps":["trace[263126455] 'process raft request'  (duration: 201.758522ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.472383Z","caller":"traceutil/trace.go:172","msg":"trace[1479975396] transaction","detail":"{read_only:false; response_revision:372; number_of_response:1; }","duration":"203.388231ms","start":"2025-11-29T09:04:11.268987Z","end":"2025-11-29T09:04:11.472376Z","steps":["trace[1479975396] 'process raft request'  (duration: 202.589992ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.472474Z","caller":"traceutil/trace.go:172","msg":"trace[991560380] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"203.902773ms","start":"2025-11-29T09:04:11.268564Z","end":"2025-11-29T09:04:11.472467Z","steps":["trace[991560380] 'process raft request'  (duration: 202.950005ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.723956Z","caller":"traceutil/trace.go:172","msg":"trace[472951656] linearizableReadLoop","detail":"{readStateIndex:388; appliedIndex:388; }","duration":"162.352265ms","start":"2025-11-29T09:04:11.561580Z","end":"2025-11-29T09:04:11.723933Z","steps":["trace[472951656] 'read index received'  (duration: 162.344086ms)","trace[472951656] 'applied index is now lower than readState.Index'  (duration: 7.096µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-29T09:04:11.724495Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"234.102918ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kube-proxy\" limit:1 ","response":"range_response_count:1 size:2878"}
	{"level":"info","ts":"2025-11-29T09:04:11.725497Z","caller":"traceutil/trace.go:172","msg":"trace[1634254383] range","detail":"{range_begin:/registry/daemonsets/kube-system/kube-proxy; range_end:; response_count:1; response_revision:377; }","duration":"235.127914ms","start":"2025-11-29T09:04:11.490355Z","end":"2025-11-29T09:04:11.725483Z","steps":["trace[1634254383] 'agreement among raft nodes before linearized reading'  (duration: 233.726265ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.724924Z","caller":"traceutil/trace.go:172","msg":"trace[176791064] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"228.817642ms","start":"2025-11-29T09:04:11.496093Z","end":"2025-11-29T09:04:11.724911Z","steps":["trace[176791064] 'process raft request'  (duration: 228.771631ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.724959Z","caller":"traceutil/trace.go:172","msg":"trace[1122891804] transaction","detail":"{read_only:false; response_revision:378; number_of_response:1; }","duration":"237.103011ms","start":"2025-11-29T09:04:11.487851Z","end":"2025-11-29T09:04:11.724954Z","steps":["trace[1122891804] 'process raft request'  (duration: 236.287787ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.725125Z","caller":"traceutil/trace.go:172","msg":"trace[1278496739] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"235.134079ms","start":"2025-11-29T09:04:11.489981Z","end":"2025-11-29T09:04:11.725115Z","steps":["trace[1278496739] 'process raft request'  (duration: 234.763037ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.725152Z","caller":"traceutil/trace.go:172","msg":"trace[2081360934] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"233.055169ms","start":"2025-11-29T09:04:11.492091Z","end":"2025-11-29T09:04:11.725146Z","steps":["trace[2081360934] 'process raft request'  (duration: 232.74166ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.731636Z","caller":"traceutil/trace.go:172","msg":"trace[137997452] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"166.17819ms","start":"2025-11-29T09:04:11.565415Z","end":"2025-11-29T09:04:11.731593Z","steps":["trace[137997452] 'process raft request'  (duration: 165.751628ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.732073Z","caller":"traceutil/trace.go:172","msg":"trace[979106742] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"161.052435ms","start":"2025-11-29T09:04:11.571010Z","end":"2025-11-29T09:04:11.732063Z","steps":["trace[979106742] 'process raft request'  (duration: 160.25485ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-29T09:04:11.736991Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"147.725094ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kindnet\" limit:1 ","response":"range_response_count:1 size:520"}
	{"level":"info","ts":"2025-11-29T09:04:11.737544Z","caller":"traceutil/trace.go:172","msg":"trace[2133078243] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kindnet; range_end:; response_count:1; response_revision:382; }","duration":"148.287197ms","start":"2025-11-29T09:04:11.589242Z","end":"2025-11-29T09:04:11.737529Z","steps":["trace[2133078243] 'agreement among raft nodes before linearized reading'  (duration: 142.709646ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.894674Z","caller":"traceutil/trace.go:172","msg":"trace[706579059] linearizableReadLoop","detail":"{readStateIndex:396; appliedIndex:396; }","duration":"139.02579ms","start":"2025-11-29T09:04:11.755627Z","end":"2025-11-29T09:04:11.894653Z","steps":["trace[706579059] 'read index received'  (duration: 139.019091ms)","trace[706579059] 'applied index is now lower than readState.Index'  (duration: 5.568µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-29T09:04:11.956673Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"201.013279ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-29T09:04:11.956817Z","caller":"traceutil/trace.go:172","msg":"trace[1785179962] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:0; response_revision:385; }","duration":"201.170542ms","start":"2025-11-29T09:04:11.755623Z","end":"2025-11-29T09:04:11.956794Z","steps":["trace[1785179962] 'agreement among raft nodes before linearized reading'  (duration: 139.257564ms)","trace[1785179962] 'range keys from in-memory index tree'  (duration: 61.707341ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-29T09:04:11.956827Z","caller":"traceutil/trace.go:172","msg":"trace[598877949] transaction","detail":"{read_only:false; response_revision:386; number_of_response:1; }","duration":"203.609347ms","start":"2025-11-29T09:04:11.753202Z","end":"2025-11-29T09:04:11.956812Z","steps":["trace[598877949] 'process raft request'  (duration: 141.835989ms)","trace[598877949] 'compare'  (duration: 61.64539ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-29T09:04:11.960140Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"194.447304ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4299"}
	{"level":"info","ts":"2025-11-29T09:04:11.960200Z","caller":"traceutil/trace.go:172","msg":"trace[1605401434] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:386; }","duration":"194.518224ms","start":"2025-11-29T09:04:11.765672Z","end":"2025-11-29T09:04:11.960190Z","steps":["trace[1605401434] 'agreement among raft nodes before linearized reading'  (duration: 194.344334ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:11.960620Z","caller":"traceutil/trace.go:172","msg":"trace[1647820491] transaction","detail":"{read_only:false; response_revision:387; number_of_response:1; }","duration":"200.53021ms","start":"2025-11-29T09:04:11.760068Z","end":"2025-11-29T09:04:11.960598Z","steps":["trace[1647820491] 'process raft request'  (duration: 200.273546ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:38.089457Z","caller":"traceutil/trace.go:172","msg":"trace[1833502831] transaction","detail":"{read_only:false; response_revision:476; number_of_response:1; }","duration":"107.146942ms","start":"2025-11-29T09:04:37.982288Z","end":"2025-11-29T09:04:38.089435Z","steps":["trace[1833502831] 'process raft request'  (duration: 106.962198ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:04:40 up  1:47,  0 user,  load average: 5.94, 3.76, 11.47
	Linux embed-certs-976238 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [af29e75e8081ce6ac2e6ffb68d826e85fb57d344b529b867ca7cc0d8f6f6194c] <==
	I1129 09:04:13.025714       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:04:13.026102       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1129 09:04:13.026249       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:04:13.026263       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:04:13.026286       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:04:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:04:13.227629       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:04:13.280129       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:04:13.280147       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:04:13.280269       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:04:13.580271       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:04:13.580302       1 metrics.go:72] Registering metrics
	I1129 09:04:13.580379       1 controller.go:711] "Syncing nftables rules"
	I1129 09:04:23.232344       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:04:23.232405       1 main.go:301] handling current node
	I1129 09:04:33.228192       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1129 09:04:33.228242       1 main.go:301] handling current node
	
	
	==> kube-apiserver [576a1e0a480b29af60f77b33edb8f6d693f5b9f7ff8b1eb26756e23a22dde168] <==
	I1129 09:04:02.783928       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1129 09:04:02.785121       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 09:04:02.795489       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1129 09:04:02.800267       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 09:04:02.800426       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:04:02.801918       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:04:02.834704       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:04:03.688928       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1129 09:04:03.693085       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1129 09:04:03.693105       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:04:04.225905       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:04:04.265639       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:04:04.394718       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1129 09:04:04.401154       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1129 09:04:04.402224       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:04:04.406769       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:04:04.747405       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:04:05.399324       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:04:05.411308       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1129 09:04:05.418370       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1129 09:04:10.066907       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:04:10.195951       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:04:10.637981       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:04:10.894987       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1129 09:04:34.591894       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:50186: use of closed network connection
	
	
	==> kube-controller-manager [957926049f5ef9ca6313979ec1f6a3ba063873beb80384c68e46faf5c8d293c8] <==
	I1129 09:04:09.748538       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1129 09:04:09.751003       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1129 09:04:09.751038       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1129 09:04:09.751066       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1129 09:04:09.752444       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1129 09:04:09.752492       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1129 09:04:09.752514       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1129 09:04:09.752518       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1129 09:04:09.752529       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1129 09:04:09.756297       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1129 09:04:09.756707       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1129 09:04:09.757979       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1129 09:04:09.758048       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:04:09.758067       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1129 09:04:09.759403       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1129 09:04:09.767643       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 09:04:09.773023       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1129 09:04:09.777406       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-976238" podCIDRs=["10.244.0.0/24"]
	I1129 09:04:09.779931       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 09:04:09.785321       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1129 09:04:09.791721       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1129 09:04:09.795933       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1129 09:04:09.798145       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 09:04:09.806816       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:04:24.703781       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a160c497eeea6ae4a856d37924cbee711fad64a776f2b530408e36774150397e] <==
	I1129 09:04:12.372059       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:04:12.465813       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:04:12.566818       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:04:12.566996       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1129 09:04:12.567169       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:04:12.661415       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:04:12.661531       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:04:12.669543       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:04:12.670582       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:04:12.670620       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:04:12.673251       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:04:12.673274       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:04:12.673312       1 config.go:200] "Starting service config controller"
	I1129 09:04:12.673318       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:04:12.673679       1 config.go:309] "Starting node config controller"
	I1129 09:04:12.673689       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:04:12.673696       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:04:12.673882       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:04:12.673893       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:04:12.773383       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 09:04:12.773436       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1129 09:04:12.776291       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [889bc1e303b190e5b435a081a4fa45511ba81f5faec06e2bc70c3429f3972219] <==
	E1129 09:04:02.754234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 09:04:02.754270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 09:04:02.754369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 09:04:02.754508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 09:04:02.754606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 09:04:02.754692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 09:04:02.754756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 09:04:02.755246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 09:04:02.755287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 09:04:02.755807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 09:04:02.755834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 09:04:02.755813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 09:04:02.755986       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:04:02.756043       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 09:04:03.587723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 09:04:03.689248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 09:04:03.696333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 09:04:03.711520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1129 09:04:03.728165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 09:04:03.823074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 09:04:03.867774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:04:03.923999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 09:04:04.019857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 09:04:04.047220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1129 09:04:05.647642       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 09:04:06 embed-certs-976238 kubelet[1438]: E1129 09:04:06.276543    1438 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-embed-certs-976238\" already exists" pod="kube-system/kube-apiserver-embed-certs-976238"
	Nov 29 09:04:06 embed-certs-976238 kubelet[1438]: I1129 09:04:06.301641    1438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-976238" podStartSLOduration=1.301616588 podStartE2EDuration="1.301616588s" podCreationTimestamp="2025-11-29 09:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:06.288772055 +0000 UTC m=+1.131555275" watchObservedRunningTime="2025-11-29 09:04:06.301616588 +0000 UTC m=+1.144399799"
	Nov 29 09:04:06 embed-certs-976238 kubelet[1438]: I1129 09:04:06.302036    1438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-976238" podStartSLOduration=1.302020055 podStartE2EDuration="1.302020055s" podCreationTimestamp="2025-11-29 09:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:06.301568982 +0000 UTC m=+1.144352197" watchObservedRunningTime="2025-11-29 09:04:06.302020055 +0000 UTC m=+1.144803268"
	Nov 29 09:04:06 embed-certs-976238 kubelet[1438]: I1129 09:04:06.324670    1438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-976238" podStartSLOduration=1.324649539 podStartE2EDuration="1.324649539s" podCreationTimestamp="2025-11-29 09:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:06.311916558 +0000 UTC m=+1.154699794" watchObservedRunningTime="2025-11-29 09:04:06.324649539 +0000 UTC m=+1.167432763"
	Nov 29 09:04:06 embed-certs-976238 kubelet[1438]: I1129 09:04:06.324808    1438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-976238" podStartSLOduration=1.324800879 podStartE2EDuration="1.324800879s" podCreationTimestamp="2025-11-29 09:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:06.324789028 +0000 UTC m=+1.167572237" watchObservedRunningTime="2025-11-29 09:04:06.324800879 +0000 UTC m=+1.167584093"
	Nov 29 09:04:09 embed-certs-976238 kubelet[1438]: I1129 09:04:09.810526    1438 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 29 09:04:09 embed-certs-976238 kubelet[1438]: I1129 09:04:09.811264    1438 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 29 09:04:11 embed-certs-976238 kubelet[1438]: I1129 09:04:11.485023    1438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/709dfd6b-61e8-43aa-97ee-f1c6adeb5fbd-cni-cfg\") pod \"kindnet-k5955\" (UID: \"709dfd6b-61e8-43aa-97ee-f1c6adeb5fbd\") " pod="kube-system/kindnet-k5955"
	Nov 29 09:04:11 embed-certs-976238 kubelet[1438]: I1129 09:04:11.485545    1438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/709dfd6b-61e8-43aa-97ee-f1c6adeb5fbd-xtables-lock\") pod \"kindnet-k5955\" (UID: \"709dfd6b-61e8-43aa-97ee-f1c6adeb5fbd\") " pod="kube-system/kindnet-k5955"
	Nov 29 09:04:11 embed-certs-976238 kubelet[1438]: I1129 09:04:11.486188    1438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/709dfd6b-61e8-43aa-97ee-f1c6adeb5fbd-lib-modules\") pod \"kindnet-k5955\" (UID: \"709dfd6b-61e8-43aa-97ee-f1c6adeb5fbd\") " pod="kube-system/kindnet-k5955"
	Nov 29 09:04:11 embed-certs-976238 kubelet[1438]: I1129 09:04:11.486327    1438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqnll\" (UniqueName: \"kubernetes.io/projected/709dfd6b-61e8-43aa-97ee-f1c6adeb5fbd-kube-api-access-sqnll\") pod \"kindnet-k5955\" (UID: \"709dfd6b-61e8-43aa-97ee-f1c6adeb5fbd\") " pod="kube-system/kindnet-k5955"
	Nov 29 09:04:11 embed-certs-976238 kubelet[1438]: I1129 09:04:11.788357    1438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/193bf7f7-0d38-4430-b64e-1c2c3b617d08-kube-proxy\") pod \"kube-proxy-prv6p\" (UID: \"193bf7f7-0d38-4430-b64e-1c2c3b617d08\") " pod="kube-system/kube-proxy-prv6p"
	Nov 29 09:04:11 embed-certs-976238 kubelet[1438]: I1129 09:04:11.789165    1438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/193bf7f7-0d38-4430-b64e-1c2c3b617d08-lib-modules\") pod \"kube-proxy-prv6p\" (UID: \"193bf7f7-0d38-4430-b64e-1c2c3b617d08\") " pod="kube-system/kube-proxy-prv6p"
	Nov 29 09:04:11 embed-certs-976238 kubelet[1438]: I1129 09:04:11.789210    1438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5t58\" (UniqueName: \"kubernetes.io/projected/193bf7f7-0d38-4430-b64e-1c2c3b617d08-kube-api-access-f5t58\") pod \"kube-proxy-prv6p\" (UID: \"193bf7f7-0d38-4430-b64e-1c2c3b617d08\") " pod="kube-system/kube-proxy-prv6p"
	Nov 29 09:04:11 embed-certs-976238 kubelet[1438]: I1129 09:04:11.789246    1438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/193bf7f7-0d38-4430-b64e-1c2c3b617d08-xtables-lock\") pod \"kube-proxy-prv6p\" (UID: \"193bf7f7-0d38-4430-b64e-1c2c3b617d08\") " pod="kube-system/kube-proxy-prv6p"
	Nov 29 09:04:13 embed-certs-976238 kubelet[1438]: I1129 09:04:13.300963    1438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-prv6p" podStartSLOduration=2.300943045 podStartE2EDuration="2.300943045s" podCreationTimestamp="2025-11-29 09:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:13.30065997 +0000 UTC m=+8.143443193" watchObservedRunningTime="2025-11-29 09:04:13.300943045 +0000 UTC m=+8.143726259"
	Nov 29 09:04:13 embed-certs-976238 kubelet[1438]: I1129 09:04:13.321602    1438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-k5955" podStartSLOduration=2.321580902 podStartE2EDuration="2.321580902s" podCreationTimestamp="2025-11-29 09:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:13.321175977 +0000 UTC m=+8.163959191" watchObservedRunningTime="2025-11-29 09:04:13.321580902 +0000 UTC m=+8.164364113"
	Nov 29 09:04:23 embed-certs-976238 kubelet[1438]: I1129 09:04:23.274173    1438 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 29 09:04:23 embed-certs-976238 kubelet[1438]: I1129 09:04:23.377463    1438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4gkv\" (UniqueName: \"kubernetes.io/projected/c34076a4-1198-4240-b8f1-28d44891e684-kube-api-access-d4gkv\") pod \"coredns-66bc5c9577-sz2td\" (UID: \"c34076a4-1198-4240-b8f1-28d44891e684\") " pod="kube-system/coredns-66bc5c9577-sz2td"
	Nov 29 09:04:23 embed-certs-976238 kubelet[1438]: I1129 09:04:23.377546    1438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c0d65141-7c62-422c-9cb7-66594118ec4e-tmp\") pod \"storage-provisioner\" (UID: \"c0d65141-7c62-422c-9cb7-66594118ec4e\") " pod="kube-system/storage-provisioner"
	Nov 29 09:04:23 embed-certs-976238 kubelet[1438]: I1129 09:04:23.377587    1438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56czn\" (UniqueName: \"kubernetes.io/projected/c0d65141-7c62-422c-9cb7-66594118ec4e-kube-api-access-56czn\") pod \"storage-provisioner\" (UID: \"c0d65141-7c62-422c-9cb7-66594118ec4e\") " pod="kube-system/storage-provisioner"
	Nov 29 09:04:23 embed-certs-976238 kubelet[1438]: I1129 09:04:23.377617    1438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c34076a4-1198-4240-b8f1-28d44891e684-config-volume\") pod \"coredns-66bc5c9577-sz2td\" (UID: \"c34076a4-1198-4240-b8f1-28d44891e684\") " pod="kube-system/coredns-66bc5c9577-sz2td"
	Nov 29 09:04:24 embed-certs-976238 kubelet[1438]: I1129 09:04:24.336831    1438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sz2td" podStartSLOduration=13.336808932 podStartE2EDuration="13.336808932s" podCreationTimestamp="2025-11-29 09:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:24.336307843 +0000 UTC m=+19.179091072" watchObservedRunningTime="2025-11-29 09:04:24.336808932 +0000 UTC m=+19.179592146"
	Nov 29 09:04:24 embed-certs-976238 kubelet[1438]: I1129 09:04:24.347858    1438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.347836325 podStartE2EDuration="12.347836325s" podCreationTimestamp="2025-11-29 09:04:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:24.347315126 +0000 UTC m=+19.190098340" watchObservedRunningTime="2025-11-29 09:04:24.347836325 +0000 UTC m=+19.190619540"
	Nov 29 09:04:26 embed-certs-976238 kubelet[1438]: I1129 09:04:26.503877    1438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56hjj\" (UniqueName: \"kubernetes.io/projected/dc39d248-15e7-409d-be52-e01d5a094726-kube-api-access-56hjj\") pod \"busybox\" (UID: \"dc39d248-15e7-409d-be52-e01d5a094726\") " pod="default/busybox"
	
	
	==> storage-provisioner [d54b901347f406aa3719940a4658b89f3c2a83e1de281b1e3ab1a3b70f37b029] <==
	I1129 09:04:23.884995       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 09:04:23.887628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:23.893040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:04:23.893234       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:04:23.893437       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"479cd1c1-98ff-4960-9b6e-9cc6ae8a115c", APIVersion:"v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-976238_32762e67-079d-438e-8eb0-cd5ee2b5ce97 became leader
	I1129 09:04:23.893642       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-976238_32762e67-079d-438e-8eb0-cd5ee2b5ce97!
	W1129 09:04:23.905007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:23.910647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:04:23.994398       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-976238_32762e67-079d-438e-8eb0-cd5ee2b5ce97!
	W1129 09:04:25.913968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:25.920244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:27.924806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:27.929323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:29.932947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:29.937949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:31.940844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:31.945034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:33.948541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:33.956403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:35.960512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:35.971423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:37.975552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:38.091189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:40.095025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:40.100419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-976238 -n embed-certs-976238
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-976238 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (14.50s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (15.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-357829 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a7187d53-caa5-4d82-a363-42dacbd45f01] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a7187d53-caa5-4d82-a363-42dacbd45f01] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.007013716s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-357829 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
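The failing assertion checks the open-file soft limit (RLIMIT_NOFILE) seen by a shell inside the busybox pod: it reports the common default of 1024 instead of the 1048576 the test expects. A minimal way to reproduce the check by hand against this profile, assuming the default-k8s-diff-port-357829 context is still present (the node-side command is an illustrative diagnostic, not part of the test):

    # Re-run the exact in-pod check the test performs
    kubectl --context default-k8s-diff-port-357829 exec busybox -- /bin/sh -c "ulimit -n"

    # Compare with the limit of the containerd process on the minikube node,
    # which pod containers typically inherit unless the runtime overrides it
    minikube -p default-k8s-diff-port-357829 ssh -- 'grep "Max open files" /proc/$(pidof containerd)/limits'

If pidof is unavailable in the node image, $(pgrep -x containerd | head -n1) can stand in for it.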
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-357829
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-357829:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "05de3679451a55dfc8fb4f57b250faa8e463d0b965a1c1b1576b246b02697d19",
	        "Created": "2025-11-29T09:03:58.829078857Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 525127,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:03:58.876457486Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/05de3679451a55dfc8fb4f57b250faa8e463d0b965a1c1b1576b246b02697d19/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05de3679451a55dfc8fb4f57b250faa8e463d0b965a1c1b1576b246b02697d19/hostname",
	        "HostsPath": "/var/lib/docker/containers/05de3679451a55dfc8fb4f57b250faa8e463d0b965a1c1b1576b246b02697d19/hosts",
	        "LogPath": "/var/lib/docker/containers/05de3679451a55dfc8fb4f57b250faa8e463d0b965a1c1b1576b246b02697d19/05de3679451a55dfc8fb4f57b250faa8e463d0b965a1c1b1576b246b02697d19-json.log",
	        "Name": "/default-k8s-diff-port-357829",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-357829:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-357829",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "05de3679451a55dfc8fb4f57b250faa8e463d0b965a1c1b1576b246b02697d19",
	                "LowerDir": "/var/lib/docker/overlay2/769eeeb85005464c69c79c6358da0e6738b1584b9f5c0a657dda7b63cc2652e4-init/diff:/var/lib/docker/overlay2/eb180691bce18b8d981b2d61ed0962851c615364ed77c18ff66d559424569005/diff",
	                "MergedDir": "/var/lib/docker/overlay2/769eeeb85005464c69c79c6358da0e6738b1584b9f5c0a657dda7b63cc2652e4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/769eeeb85005464c69c79c6358da0e6738b1584b9f5c0a657dda7b63cc2652e4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/769eeeb85005464c69c79c6358da0e6738b1584b9f5c0a657dda7b63cc2652e4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-357829",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-357829/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-357829",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-357829",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-357829",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d3783ab61f07fb536c578fe5694915165cec9448bb8b6b991ad6987f87f01ef0",
	            "SandboxKey": "/var/run/docker/netns/d3783ab61f07",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-357829": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f17ce87ca21659c5da3c274e1459137df3b8383021f2c5ec9c0cce59ba7e7b7c",
	                    "EndpointID": "f94e99d27baba5f3fb52c96f5af6417ce5e6509bebf3eb66f40c6165550e9014",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "46:49:5e:16:5e:f4",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-357829",
	                        "05de3679451a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
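Note that "Ulimits" in the HostConfig above is empty, so no per-container file-descriptor limit was requested when the kic node container was created; the limit that eventually reaches the busybox pod is inherited further down the chain (Docker daemon defaults, then systemd and containerd inside the node). A quick, hedged way to see what the node container itself reports, using the container name from this run:

    # Open-file soft limit inside the kic node container
    docker exec default-k8s-diff-port-357829 sh -c 'ulimit -n'

If this already shows 1024, the daemon-level defaults (the default-ulimits key in /etc/docker/daemon.json, when set) are the first place to look.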
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-357829 -n default-k8s-diff-port-357829
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-357829 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-357829 logs -n 25: (1.561300797s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ pause   │ -p old-k8s-version-295154 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-295154       │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ unpause │ -p old-k8s-version-295154 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-295154       │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ delete  │ -p old-k8s-version-295154                                                                                                                                                                                                                           │ old-k8s-version-295154       │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ delete  │ -p old-k8s-version-295154                                                                                                                                                                                                                           │ old-k8s-version-295154       │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ start   │ -p embed-certs-976238 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-976238           │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:04 UTC │
	│ image   │ no-preload-924441 image list --format=json                                                                                                                                                                                                          │ no-preload-924441            │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ pause   │ -p no-preload-924441 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-924441            │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ unpause │ -p no-preload-924441 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-924441            │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ delete  │ -p no-preload-924441                                                                                                                                                                                                                                │ no-preload-924441            │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ delete  │ -p no-preload-924441                                                                                                                                                                                                                                │ no-preload-924441            │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ delete  │ -p disable-driver-mounts-286131                                                                                                                                                                                                                     │ disable-driver-mounts-286131 │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ start   │ -p default-k8s-diff-port-357829 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-357829 │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:04 UTC │
	│ start   │ -p cert-expiration-368536 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-368536       │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:04 UTC │
	│ delete  │ -p cert-expiration-368536                                                                                                                                                                                                                           │ cert-expiration-368536       │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │ 29 Nov 25 09:04 UTC │
	│ start   │ -p newest-cni-106601 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-106601            │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │ 29 Nov 25 09:04 UTC │
	│ start   │ -p kubernetes-upgrade-806701 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-806701    │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │                     │
	│ start   │ -p kubernetes-upgrade-806701 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-806701    │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │ 29 Nov 25 09:04 UTC │
	│ delete  │ -p kubernetes-upgrade-806701                                                                                                                                                                                                                        │ kubernetes-upgrade-806701    │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │ 29 Nov 25 09:04 UTC │
	│ start   │ -p auto-770004 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-770004                  │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-106601 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-106601            │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │ 29 Nov 25 09:04 UTC │
	│ stop    │ -p newest-cni-106601 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-106601            │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │ 29 Nov 25 09:04 UTC │
	│ addons  │ enable metrics-server -p embed-certs-976238 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-976238           │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │ 29 Nov 25 09:04 UTC │
	│ addons  │ enable dashboard -p newest-cni-106601 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-106601            │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │ 29 Nov 25 09:04 UTC │
	│ start   │ -p newest-cni-106601 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-106601            │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │                     │
	│ stop    │ -p embed-certs-976238 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-976238           │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:04:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:04:41.406685  540002 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:04:41.406896  540002 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:04:41.406909  540002 out.go:374] Setting ErrFile to fd 2...
	I1129 09:04:41.406915  540002 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:04:41.407223  540002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-255825/.minikube/bin
	I1129 09:04:41.407865  540002 out.go:368] Setting JSON to false
	I1129 09:04:41.409558  540002 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6425,"bootTime":1764400656,"procs":325,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:04:41.409641  540002 start.go:143] virtualization: kvm guest
	I1129 09:04:41.411942  540002 out.go:179] * [newest-cni-106601] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:04:41.413320  540002 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:04:41.413316  540002 notify.go:221] Checking for updates...
	I1129 09:04:41.414665  540002 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:04:41.415856  540002 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-255825/kubeconfig
	I1129 09:04:41.417157  540002 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-255825/.minikube
	I1129 09:04:41.418186  540002 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:04:41.419933  540002 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:04:41.421880  540002 config.go:182] Loaded profile config "newest-cni-106601": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:04:41.422773  540002 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:04:41.451869  540002 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 09:04:41.452091  540002 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:04:41.525075  540002 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-29 09:04:41.512630655 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:04:41.525184  540002 docker.go:319] overlay module found
	I1129 09:04:41.527287  540002 out.go:179] * Using the docker driver based on existing profile
	I1129 09:04:41.528414  540002 start.go:309] selected driver: docker
	I1129 09:04:41.528429  540002 start.go:927] validating driver "docker" against &{Name:newest-cni-106601 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-106601 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:04:41.528547  540002 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:04:41.529197  540002 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:04:41.609608  540002 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-29 09:04:41.59502198 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:04:41.610075  540002 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1129 09:04:41.610126  540002 cni.go:84] Creating CNI manager for ""
	I1129 09:04:41.610204  540002 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:04:41.610267  540002 start.go:353] cluster config:
	{Name:newest-cni-106601 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-106601 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:04:41.611758  540002 out.go:179] * Starting "newest-cni-106601" primary control-plane node in "newest-cni-106601" cluster
	I1129 09:04:41.612743  540002 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1129 09:04:41.613866  540002 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:04:38.189697  535908 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-255825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-770004:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.71853773s)
	I1129 09:04:38.189726  535908 kic.go:203] duration metric: took 4.718712191s to extract preloaded images to volume ...
	W1129 09:04:38.189899  535908 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1129 09:04:38.189945  535908 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1129 09:04:38.189986  535908 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 09:04:38.290030  535908 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-770004 --name auto-770004 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-770004 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-770004 --network auto-770004 --ip 192.168.85.2 --volume auto-770004:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 09:04:38.646034  535908 cli_runner.go:164] Run: docker container inspect auto-770004 --format={{.State.Running}}
	I1129 09:04:38.667719  535908 cli_runner.go:164] Run: docker container inspect auto-770004 --format={{.State.Status}}
	I1129 09:04:38.692064  535908 cli_runner.go:164] Run: docker exec auto-770004 stat /var/lib/dpkg/alternatives/iptables
	I1129 09:04:38.750534  535908 oci.go:144] the created container "auto-770004" has a running status.
	I1129 09:04:38.750568  535908 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-255825/.minikube/machines/auto-770004/id_rsa...
	I1129 09:04:38.989860  535908 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-255825/.minikube/machines/auto-770004/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 09:04:39.032952  535908 cli_runner.go:164] Run: docker container inspect auto-770004 --format={{.State.Status}}
	I1129 09:04:39.060809  535908 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 09:04:39.060833  535908 kic_runner.go:114] Args: [docker exec --privileged auto-770004 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 09:04:39.124616  535908 cli_runner.go:164] Run: docker container inspect auto-770004 --format={{.State.Status}}
	I1129 09:04:39.150282  535908 machine.go:94] provisionDockerMachine start ...
	I1129 09:04:39.150391  535908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-770004
	I1129 09:04:39.171969  535908 main.go:143] libmachine: Using SSH client type: native
	I1129 09:04:39.172233  535908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1129 09:04:39.172251  535908 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:04:39.326141  535908 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-770004
	
	I1129 09:04:39.326169  535908 ubuntu.go:182] provisioning hostname "auto-770004"
	I1129 09:04:39.326224  535908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-770004
	I1129 09:04:39.351481  535908 main.go:143] libmachine: Using SSH client type: native
	I1129 09:04:39.351888  535908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1129 09:04:39.351913  535908 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-770004 && echo "auto-770004" | sudo tee /etc/hostname
	I1129 09:04:39.523428  535908 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-770004
	
	I1129 09:04:39.523533  535908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-770004
	I1129 09:04:39.546768  535908 main.go:143] libmachine: Using SSH client type: native
	I1129 09:04:39.547089  535908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1129 09:04:39.547118  535908 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-770004' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-770004/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-770004' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:04:39.699176  535908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:04:39.699211  535908 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-255825/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-255825/.minikube}
	I1129 09:04:39.699242  535908 ubuntu.go:190] setting up certificates
	I1129 09:04:39.699256  535908 provision.go:84] configureAuth start
	I1129 09:04:39.699338  535908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-770004
	I1129 09:04:39.725652  535908 provision.go:143] copyHostCerts
	I1129 09:04:39.725713  535908 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem, removing ...
	I1129 09:04:39.725723  535908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem
	I1129 09:04:39.725826  535908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem (1679 bytes)
	I1129 09:04:39.725973  535908 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem, removing ...
	I1129 09:04:39.725988  535908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem
	I1129 09:04:39.726034  535908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem (1078 bytes)
	I1129 09:04:39.726129  535908 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem, removing ...
	I1129 09:04:39.726140  535908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem
	I1129 09:04:39.726178  535908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem (1123 bytes)
	I1129 09:04:39.726282  535908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem org=jenkins.auto-770004 san=[127.0.0.1 192.168.85.2 auto-770004 localhost minikube]
	I1129 09:04:39.845319  535908 provision.go:177] copyRemoteCerts
	I1129 09:04:39.845397  535908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:04:39.845449  535908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-770004
	I1129 09:04:39.868062  535908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/auto-770004/id_rsa Username:docker}
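	# Sketch (not minikube's own code) of how the SSH endpoint above is derived: the
	# kic container publishes 22/tcp on an ephemeral host port, which is read back
	# with the same Go template shown in the log and paired with the generated
	# machine key ("auto-770004" and the key path are taken from this run).
	PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' auto-770004)
	ssh -i /home/jenkins/minikube-integration/22000-255825/.minikube/machines/auto-770004/id_rsa -p "$PORT" docker@127.0.0.1 hostname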
	I1129 09:04:39.976936  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1129 09:04:39.997143  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 09:04:40.016432  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:04:40.038786  535908 provision.go:87] duration metric: took 339.511821ms to configureAuth
	I1129 09:04:40.038817  535908 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:04:40.039047  535908 config.go:182] Loaded profile config "auto-770004": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:04:40.039068  535908 machine.go:97] duration metric: took 888.764096ms to provisionDockerMachine
	I1129 09:04:40.039080  535908 client.go:176] duration metric: took 7.110115816s to LocalClient.Create
	I1129 09:04:40.039108  535908 start.go:167] duration metric: took 7.110183776s to libmachine.API.Create "auto-770004"
	I1129 09:04:40.039116  535908 start.go:293] postStartSetup for "auto-770004" (driver="docker")
	I1129 09:04:40.039128  535908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:04:40.039188  535908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:04:40.039243  535908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-770004
	I1129 09:04:40.063572  535908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/auto-770004/id_rsa Username:docker}
	I1129 09:04:40.173901  535908 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:04:40.177658  535908 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:04:40.177689  535908 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:04:40.177703  535908 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-255825/.minikube/addons for local assets ...
	I1129 09:04:40.177795  535908 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-255825/.minikube/files for local assets ...
	I1129 09:04:40.177915  535908 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem -> 2594832.pem in /etc/ssl/certs
	I1129 09:04:40.178047  535908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:04:40.187514  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem --> /etc/ssl/certs/2594832.pem (1708 bytes)
	I1129 09:04:40.213157  535908 start.go:296] duration metric: took 174.012404ms for postStartSetup
	I1129 09:04:40.213579  535908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-770004
	I1129 09:04:40.233634  535908 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/config.json ...
	I1129 09:04:40.234009  535908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:04:40.234050  535908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-770004
	I1129 09:04:40.259627  535908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/auto-770004/id_rsa Username:docker}
	I1129 09:04:40.363903  535908 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:04:40.371115  535908 start.go:128] duration metric: took 7.44457847s to createHost
	I1129 09:04:40.371144  535908 start.go:83] releasing machines lock for "auto-770004", held for 7.444700159s
	I1129 09:04:40.371228  535908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-770004
	I1129 09:04:40.394654  535908 ssh_runner.go:195] Run: cat /version.json
	I1129 09:04:40.394714  535908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-770004
	I1129 09:04:40.394721  535908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:04:40.394822  535908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-770004
	I1129 09:04:40.416903  535908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/auto-770004/id_rsa Username:docker}
	I1129 09:04:40.417398  535908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/auto-770004/id_rsa Username:docker}
	I1129 09:04:40.519871  535908 ssh_runner.go:195] Run: systemctl --version
	I1129 09:04:40.582471  535908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:04:40.587256  535908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:04:40.587318  535908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:04:40.630527  535908 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1129 09:04:40.630556  535908 start.go:496] detecting cgroup driver to use...
	I1129 09:04:40.630589  535908 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:04:40.630635  535908 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1129 09:04:40.648104  535908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1129 09:04:40.672777  535908 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:04:40.672843  535908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:04:40.694694  535908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:04:40.713598  535908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:04:40.801300  535908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:04:40.889574  535908 docker.go:234] disabling docker service ...
	I1129 09:04:40.889634  535908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:04:40.913072  535908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:04:40.927112  535908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:04:41.012009  535908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:04:41.099449  535908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:04:41.113880  535908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:04:41.129786  535908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1129 09:04:41.142654  535908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1129 09:04:41.152132  535908 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1129 09:04:41.152206  535908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1129 09:04:41.161918  535908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:04:41.171727  535908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1129 09:04:41.181216  535908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:04:41.190369  535908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:04:41.198895  535908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1129 09:04:41.208075  535908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1129 09:04:41.217523  535908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1129 09:04:41.227378  535908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:04:41.237589  535908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:04:41.248203  535908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:04:41.370706  535908 ssh_runner.go:195] Run: sudo systemctl restart containerd
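	# Consolidated sketch of the containerd adjustments applied just above (commands
	# and values taken from this log, not a definitive recipe): switch runc to the
	# systemd cgroup driver, pin the CRI sandbox image, then reload and restart
	# containerd so the new config takes effect.
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
	sudo systemctl daemon-reload && sudo systemctl restart containerd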
	I1129 09:04:41.510398  535908 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1129 09:04:41.510512  535908 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1129 09:04:41.515569  535908 start.go:564] Will wait 60s for crictl version
	I1129 09:04:41.515631  535908 ssh_runner.go:195] Run: which crictl
	I1129 09:04:41.521512  535908 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:04:41.562942  535908 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1129 09:04:41.563038  535908 ssh_runner.go:195] Run: containerd --version
	I1129 09:04:41.595112  535908 ssh_runner.go:195] Run: containerd --version
	I1129 09:04:41.630089  535908 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1129 09:04:41.614876  540002 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:04:41.614918  540002 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1129 09:04:41.614944  540002 cache.go:65] Caching tarball of preloaded images
	I1129 09:04:41.614990  540002 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:04:41.615046  540002 preload.go:238] Found /home/jenkins/minikube-integration/22000-255825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1129 09:04:41.615057  540002 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1129 09:04:41.615225  540002 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/newest-cni-106601/config.json ...
	I1129 09:04:41.641696  540002 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:04:41.641721  540002 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:04:41.641766  540002 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:04:41.641806  540002 start.go:360] acquireMachinesLock for newest-cni-106601: {Name:mk30620cdf9d2fed47ccfe496a0ec3101f264b78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:04:41.641883  540002 start.go:364] duration metric: took 47.588µs to acquireMachinesLock for "newest-cni-106601"
	I1129 09:04:41.641909  540002 start.go:96] Skipping create...Using existing machine configuration
	I1129 09:04:41.641916  540002 fix.go:54] fixHost starting: 
	I1129 09:04:41.642228  540002 cli_runner.go:164] Run: docker container inspect newest-cni-106601 --format={{.State.Status}}
	I1129 09:04:41.664422  540002 fix.go:112] recreateIfNeeded on newest-cni-106601: state=Stopped err=<nil>
	W1129 09:04:41.664460  540002 fix.go:138] unexpected machine state, will restart: <nil>
	I1129 09:04:41.631148  535908 cli_runner.go:164] Run: docker network inspect auto-770004 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:04:41.656328  535908 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1129 09:04:41.661185  535908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:04:41.674264  535908 kubeadm.go:884] updating cluster {Name:auto-770004 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-770004 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:04:41.674413  535908 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:04:41.674477  535908 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:04:41.707720  535908 containerd.go:627] all images are preloaded for containerd runtime.
	I1129 09:04:41.707768  535908 containerd.go:534] Images already preloaded, skipping extraction
	I1129 09:04:41.707835  535908 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:04:41.751908  535908 containerd.go:627] all images are preloaded for containerd runtime.
	I1129 09:04:41.751933  535908 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:04:41.751942  535908 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1129 09:04:41.752060  535908 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-770004 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-770004 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
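	# The empty "ExecStart=" followed by a second "ExecStart=..." in the unit text
	# above is the standard systemd drop-in convention: the blank assignment clears
	# the packaged command so the drop-in's command replaces it rather than appends.
	# A quick way to inspect the merged result on the node (a sketch, run inside the
	# minikube machine):
	sudo systemctl cat kubelet.service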
	I1129 09:04:41.752134  535908 ssh_runner.go:195] Run: sudo crictl info
	I1129 09:04:41.783606  535908 cni.go:84] Creating CNI manager for ""
	I1129 09:04:41.783646  535908 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:04:41.783669  535908 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:04:41.783781  535908 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-770004 NodeName:auto-770004 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:04:41.783973  535908 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "auto-770004"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 09:04:41.784072  535908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:04:41.795810  535908 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:04:41.795888  535908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:04:41.807186  535908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1129 09:04:41.829936  535908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:04:41.846472  535908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
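	# Optional sanity check (a sketch; minikube does not run this itself): the
	# rendered kubeadm config copied above can be exercised without touching the
	# node's state by pointing kubeadm's dry-run mode at it.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run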
	I1129 09:04:41.861636  535908 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:04:41.866145  535908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:04:41.877099  535908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:04:41.987642  535908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:04:42.013333  535908 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004 for IP: 192.168.85.2
	I1129 09:04:42.013358  535908 certs.go:195] generating shared ca certs ...
	I1129 09:04:42.013381  535908 certs.go:227] acquiring lock for ca certs: {Name:mk5e6bcae0a6944966b241f3c6197a472703c991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:04:42.013560  535908 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.key
	I1129 09:04:42.013620  535908 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.key
	I1129 09:04:42.013636  535908 certs.go:257] generating profile certs ...
	I1129 09:04:42.013707  535908 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/client.key
	I1129 09:04:42.013722  535908 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/client.crt with IP's: []
	I1129 09:04:42.109549  535908 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/client.crt ...
	I1129 09:04:42.109586  535908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/client.crt: {Name:mkdcba972ae9b889a10497b78b0dc5d8c10c2bfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:04:42.109805  535908 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/client.key ...
	I1129 09:04:42.109825  535908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/client.key: {Name:mkf7487fe7304f26b8555354153479495769bc80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:04:42.110389  535908 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/apiserver.key.e745c512
	I1129 09:04:42.110418  535908 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/apiserver.crt.e745c512 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1129 09:04:42.238544  535908 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/apiserver.crt.e745c512 ...
	I1129 09:04:42.238581  535908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/apiserver.crt.e745c512: {Name:mk74a351aab7f154207df9b146b6f8aea1c9ceaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:04:42.238784  535908 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/apiserver.key.e745c512 ...
	I1129 09:04:42.238806  535908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/apiserver.key.e745c512: {Name:mkac3795cb8d46fbcc479466786f252fce972f81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:04:42.238920  535908 certs.go:382] copying /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/apiserver.crt.e745c512 -> /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/apiserver.crt
	I1129 09:04:42.239048  535908 certs.go:386] copying /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/apiserver.key.e745c512 -> /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/apiserver.key
	I1129 09:04:42.239134  535908 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/proxy-client.key
	I1129 09:04:42.239158  535908 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/proxy-client.crt with IP's: []
	I1129 09:04:42.313498  535908 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/proxy-client.crt ...
	I1129 09:04:42.313530  535908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/proxy-client.crt: {Name:mke6e95b0965e72c9ea4f083e3554e515a0c98ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:04:42.313728  535908 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/proxy-client.key ...
	I1129 09:04:42.313766  535908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/proxy-client.key: {Name:mk27d0487ae43c58adb88322643701996dcf764e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:04:42.313984  535908 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483.pem (1338 bytes)
	W1129 09:04:42.314034  535908 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483_empty.pem, impossibly tiny 0 bytes
	I1129 09:04:42.314049  535908 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 09:04:42.314090  535908 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:04:42.314134  535908 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:04:42.314169  535908 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem (1679 bytes)
	I1129 09:04:42.314234  535908 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem (1708 bytes)
	I1129 09:04:42.314900  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:04:42.334704  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:04:42.354378  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:04:42.372961  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 09:04:42.390309  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1129 09:04:42.407401  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1129 09:04:42.424315  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:04:42.440905  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 09:04:42.457837  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem --> /usr/share/ca-certificates/2594832.pem (1708 bytes)
	I1129 09:04:42.477679  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:04:42.494573  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483.pem --> /usr/share/ca-certificates/259483.pem (1338 bytes)
	I1129 09:04:42.511783  535908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:04:42.523853  535908 ssh_runner.go:195] Run: openssl version
	I1129 09:04:42.529876  535908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:04:42.537772  535908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:04:42.541322  535908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:04:42.541377  535908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:04:42.575696  535908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 09:04:42.583979  535908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259483.pem && ln -fs /usr/share/ca-certificates/259483.pem /etc/ssl/certs/259483.pem"
	I1129 09:04:42.592956  535908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259483.pem
	I1129 09:04:42.596356  535908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:35 /usr/share/ca-certificates/259483.pem
	I1129 09:04:42.596406  535908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259483.pem
	I1129 09:04:42.630672  535908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259483.pem /etc/ssl/certs/51391683.0"
	I1129 09:04:42.638924  535908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2594832.pem && ln -fs /usr/share/ca-certificates/2594832.pem /etc/ssl/certs/2594832.pem"
	I1129 09:04:42.647015  535908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2594832.pem
	I1129 09:04:42.650506  535908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:35 /usr/share/ca-certificates/2594832.pem
	I1129 09:04:42.650553  535908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2594832.pem
	I1129 09:04:42.685279  535908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2594832.pem /etc/ssl/certs/3ec20f2e.0"
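	# What the hash-and-symlink steps above accomplish (a sketch): OpenSSL looks up
	# CAs in /etc/ssl/certs by subject-hash filenames, so each installed PEM gets a
	# "<hash>.0" link whose name comes from `openssl x509 -hash` (b5213941 for the
	# minikube CA in this run).
	HASH=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"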
	I1129 09:04:42.693682  535908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:04:42.697222  535908 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 09:04:42.697289  535908 kubeadm.go:401] StartCluster: {Name:auto-770004 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-770004 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:04:42.697392  535908 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1129 09:04:42.697452  535908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:04:42.723391  535908 cri.go:89] found id: ""
	I1129 09:04:42.723441  535908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:04:42.731192  535908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 09:04:42.738974  535908 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 09:04:42.739026  535908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 09:04:42.746563  535908 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 09:04:42.746585  535908 kubeadm.go:158] found existing configuration files:
	
	I1129 09:04:42.746623  535908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 09:04:42.753939  535908 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 09:04:42.753983  535908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 09:04:42.761213  535908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 09:04:42.768329  535908 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 09:04:42.768378  535908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 09:04:42.775477  535908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 09:04:42.782500  535908 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 09:04:42.782548  535908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 09:04:42.789797  535908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 09:04:42.797258  535908 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 09:04:42.797292  535908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1129 09:04:42.804548  535908 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 09:04:42.841791  535908 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1129 09:04:42.841871  535908 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 09:04:42.873015  535908 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 09:04:42.873127  535908 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1129 09:04:42.873194  535908 kubeadm.go:319] OS: Linux
	I1129 09:04:42.873294  535908 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 09:04:42.873390  535908 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 09:04:42.873459  535908 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 09:04:42.873554  535908 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 09:04:42.873626  535908 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 09:04:42.873705  535908 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 09:04:42.873796  535908 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 09:04:42.873900  535908 kubeadm.go:319] CGROUPS_IO: enabled
	I1129 09:04:42.933116  535908 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 09:04:42.933241  535908 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 09:04:42.933401  535908 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1129 09:04:42.938520  535908 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 09:04:41.665897  540002 out.go:252] * Restarting existing docker container for "newest-cni-106601" ...
	I1129 09:04:41.665963  540002 cli_runner.go:164] Run: docker start newest-cni-106601
	I1129 09:04:41.965100  540002 cli_runner.go:164] Run: docker container inspect newest-cni-106601 --format={{.State.Status}}
	I1129 09:04:41.988619  540002 kic.go:430] container "newest-cni-106601" state is running.
	I1129 09:04:41.989265  540002 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-106601
	I1129 09:04:42.013932  540002 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/newest-cni-106601/config.json ...
	I1129 09:04:42.014319  540002 machine.go:94] provisionDockerMachine start ...
	I1129 09:04:42.014386  540002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-106601
	I1129 09:04:42.038645  540002 main.go:143] libmachine: Using SSH client type: native
	I1129 09:04:42.039099  540002 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1129 09:04:42.039132  540002 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:04:42.039917  540002 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42350->127.0.0.1:33098: read: connection reset by peer
	I1129 09:04:45.186201  540002 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-106601
	
	I1129 09:04:45.186239  540002 ubuntu.go:182] provisioning hostname "newest-cni-106601"
	I1129 09:04:45.186311  540002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-106601
	I1129 09:04:45.205701  540002 main.go:143] libmachine: Using SSH client type: native
	I1129 09:04:45.205946  540002 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1129 09:04:45.205975  540002 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-106601 && echo "newest-cni-106601" | sudo tee /etc/hostname
	I1129 09:04:45.365797  540002 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-106601
	
	I1129 09:04:45.365893  540002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-106601
	I1129 09:04:45.385791  540002 main.go:143] libmachine: Using SSH client type: native
	I1129 09:04:45.386026  540002 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1129 09:04:45.386043  540002 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-106601' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-106601/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-106601' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:04:45.533770  540002 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:04:45.533805  540002 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-255825/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-255825/.minikube}
	I1129 09:04:45.533869  540002 ubuntu.go:190] setting up certificates
	I1129 09:04:45.533886  540002 provision.go:84] configureAuth start
	I1129 09:04:45.533967  540002 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-106601
	I1129 09:04:45.552789  540002 provision.go:143] copyHostCerts
	I1129 09:04:45.552863  540002 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem, removing ...
	I1129 09:04:45.552880  540002 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem
	I1129 09:04:45.552963  540002 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem (1078 bytes)
	I1129 09:04:45.553084  540002 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem, removing ...
	I1129 09:04:45.553098  540002 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem
	I1129 09:04:45.553150  540002 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem (1123 bytes)
	I1129 09:04:45.553239  540002 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem, removing ...
	I1129 09:04:45.553250  540002 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem
	I1129 09:04:45.553292  540002 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem (1679 bytes)
	I1129 09:04:45.553451  540002 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem org=jenkins.newest-cni-106601 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-106601]
	I1129 09:04:45.643862  540002 provision.go:177] copyRemoteCerts
	I1129 09:04:45.643930  540002 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:04:45.643980  540002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-106601
	I1129 09:04:45.662982  540002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/newest-cni-106601/id_rsa Username:docker}
	I1129 09:04:45.765523  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:04:45.783749  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 09:04:45.801293  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1129 09:04:45.819586  540002 provision.go:87] duration metric: took 285.680751ms to configureAuth
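configureAuth copies the CA material into the profile and signs a server certificate whose SANs cover 127.0.0.1, the container IP, localhost, minikube and the profile name, as the provision log shows. The Go sketch below issues a SAN-bearing certificate with crypto/x509 under the same names; it is self-signed for brevity (minikube signs with its own CA), and the organization and expiry values are copied from the log rather than from minikube's code.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-106601"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as listed in the provision log
		DNSNames:    []string{"localhost", "minikube", "newest-cni-106601"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```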
	I1129 09:04:45.819618  540002 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:04:45.819866  540002 config.go:182] Loaded profile config "newest-cni-106601": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:04:45.819880  540002 machine.go:97] duration metric: took 3.805545616s to provisionDockerMachine
	I1129 09:04:45.819890  540002 start.go:293] postStartSetup for "newest-cni-106601" (driver="docker")
	I1129 09:04:45.819901  540002 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:04:45.819955  540002 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:04:45.820002  540002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-106601
	I1129 09:04:45.837708  540002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/newest-cni-106601/id_rsa Username:docker}
	I1129 09:04:45.942368  540002 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:04:45.946448  540002 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:04:45.946494  540002 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:04:45.946506  540002 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-255825/.minikube/addons for local assets ...
	I1129 09:04:45.946557  540002 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-255825/.minikube/files for local assets ...
	I1129 09:04:45.946635  540002 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem -> 2594832.pem in /etc/ssl/certs
	I1129 09:04:45.946727  540002 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:04:45.954934  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem --> /etc/ssl/certs/2594832.pem (1708 bytes)
	I1129 09:04:45.972792  540002 start.go:296] duration metric: took 152.887271ms for postStartSetup
	I1129 09:04:45.972880  540002 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:04:45.972936  540002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-106601
	I1129 09:04:45.990527  540002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/newest-cni-106601/id_rsa Username:docker}
	I1129 09:04:46.090401  540002 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:04:46.095289  540002 fix.go:56] duration metric: took 4.453365836s for fixHost
	I1129 09:04:46.095323  540002 start.go:83] releasing machines lock for "newest-cni-106601", held for 4.453422843s
	I1129 09:04:46.095414  540002 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-106601
	I1129 09:04:46.114930  540002 ssh_runner.go:195] Run: cat /version.json
	I1129 09:04:46.114989  540002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-106601
	I1129 09:04:46.114986  540002 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:04:46.115076  540002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-106601
	I1129 09:04:46.133328  540002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/newest-cni-106601/id_rsa Username:docker}
	I1129 09:04:46.135631  540002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/newest-cni-106601/id_rsa Username:docker}
	I1129 09:04:46.233362  540002 ssh_runner.go:195] Run: systemctl --version
	I1129 09:04:46.287472  540002 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:04:46.292631  540002 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:04:46.292693  540002 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:04:46.301125  540002 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1129 09:04:46.301154  540002 start.go:496] detecting cgroup driver to use...
	I1129 09:04:46.301189  540002 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:04:46.301232  540002 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1129 09:04:46.318115  540002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1129 09:04:46.331824  540002 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:04:46.331873  540002 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:04:46.347455  540002 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:04:46.360693  540002 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:04:42.940384  535908 out.go:252]   - Generating certificates and keys ...
	I1129 09:04:42.940483  535908 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 09:04:42.940576  535908 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 09:04:43.280978  535908 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1129 09:04:43.639954  535908 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 09:04:43.699103  535908 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 09:04:43.989174  535908 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 09:04:44.307451  535908 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 09:04:44.307572  535908 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-770004 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1129 09:04:44.419421  535908 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 09:04:44.419602  535908 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-770004 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1129 09:04:44.676972  535908 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 09:04:44.937468  535908 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 09:04:45.006216  535908 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 09:04:45.006310  535908 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 09:04:45.361869  535908 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 09:04:45.917661  535908 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 09:04:46.258293  535908 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 09:04:46.481004  535908 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 09:04:46.701923  535908 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 09:04:46.703182  535908 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 09:04:46.707422  535908 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 09:04:46.442096  540002 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:04:46.529258  540002 docker.go:234] disabling docker service ...
	I1129 09:04:46.529338  540002 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:04:46.545918  540002 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:04:46.561168  540002 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:04:46.647463  540002 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:04:46.751237  540002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:04:46.765273  540002 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:04:46.783588  540002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1129 09:04:46.792988  540002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1129 09:04:46.802179  540002 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1129 09:04:46.802267  540002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1129 09:04:46.811443  540002 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:04:46.820603  540002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1129 09:04:46.830371  540002 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:04:46.840054  540002 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:04:46.848884  540002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1129 09:04:46.859237  540002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1129 09:04:46.869258  540002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1129 09:04:46.880391  540002 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:04:46.888972  540002 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:04:46.896755  540002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:04:46.980432  540002 ssh_runner.go:195] Run: sudo systemctl restart containerd
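The sed calls above rewrite /etc/containerd/config.toml in place (sandbox image, SystemdCgroup = true, runc v2 runtime, CNI conf dir) before the daemon-reload and containerd restart. As a hedged illustration, the Go sketch below performs just the SystemdCgroup toggle with a regexp, equivalent to the `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'` step in the log; the path and permissions are assumptions, not minikube code.

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setSystemdCgroup flips SystemdCgroup to true in a containerd config.toml,
// preserving the original indentation captured by the first group.
func setSystemdCgroup(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
	return os.WriteFile(path, out, 0644)
}

func main() {
	if err := setSystemdCgroup("/etc/containerd/config.toml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```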
	I1129 09:04:47.100069  540002 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1129 09:04:47.100149  540002 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1129 09:04:47.105820  540002 start.go:564] Will wait 60s for crictl version
	I1129 09:04:47.105896  540002 ssh_runner.go:195] Run: which crictl
	I1129 09:04:47.110369  540002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:04:47.138327  540002 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
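Before probing crictl, the log shows minikube waiting up to 60s for /run/containerd/containerd.sock to exist. A rough Go sketch of that wait loop is below; it polls for the socket path with a deadline and is only an illustration (the real code shells out to `stat` over SSH rather than stat-ing locally).

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a unix socket path until it appears or the deadline
// passes, roughly what "Will wait 60s for socket path" corresponds to.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```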
	I1129 09:04:47.138394  540002 ssh_runner.go:195] Run: containerd --version
	I1129 09:04:47.161808  540002 ssh_runner.go:195] Run: containerd --version
	I1129 09:04:47.187135  540002 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1129 09:04:47.188396  540002 cli_runner.go:164] Run: docker network inspect newest-cni-106601 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:04:47.207860  540002 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1129 09:04:47.213033  540002 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:04:47.226913  540002 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1129 09:04:46.709619  535908 out.go:252]   - Booting up control plane ...
	I1129 09:04:46.709750  535908 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 09:04:46.709888  535908 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 09:04:46.710023  535908 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 09:04:46.727576  535908 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 09:04:46.727799  535908 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1129 09:04:46.735258  535908 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1129 09:04:46.735534  535908 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 09:04:46.735617  535908 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 09:04:46.847351  535908 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1129 09:04:46.847537  535908 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1129 09:04:47.349161  535908 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.922351ms
	I1129 09:04:47.352603  535908 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1129 09:04:47.352872  535908 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1129 09:04:47.353024  535908 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1129 09:04:47.353154  535908 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1129 09:04:47.228923  540002 kubeadm.go:884] updating cluster {Name:newest-cni-106601 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-106601 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks
:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:04:47.229104  540002 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:04:47.229178  540002 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:04:47.257766  540002 containerd.go:627] all images are preloaded for containerd runtime.
	I1129 09:04:47.257796  540002 containerd.go:534] Images already preloaded, skipping extraction
	I1129 09:04:47.257878  540002 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:04:47.287866  540002 containerd.go:627] all images are preloaded for containerd runtime.
	I1129 09:04:47.287892  540002 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:04:47.287902  540002 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 containerd true true} ...
	I1129 09:04:47.288040  540002 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-106601 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-106601 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:04:47.288118  540002 ssh_runner.go:195] Run: sudo crictl info
	I1129 09:04:47.316701  540002 cni.go:84] Creating CNI manager for ""
	I1129 09:04:47.316750  540002 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:04:47.316770  540002 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1129 09:04:47.316794  540002 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-106601 NodeName:newest-cni-106601 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:04:47.316913  540002 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-106601"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
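The generated kubeadm config wires the `kubeadm.pod-network-cidr=10.42.0.0/16` extra option through both `networking.podSubnet` and the kube-proxy `clusterCIDR`, alongside the default 10.96.0.0/12 service range. As a small sanity check of those values (copied from the config above, not derived from minikube code), the Go snippet below parses both CIDRs and confirms they do not overlap:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	_, pod, err := net.ParseCIDR("10.42.0.0/16") // podSubnet / clusterCIDR
	if err != nil {
		panic(err)
	}
	_, svc, err := net.ParseCIDR("10.96.0.0/12") // serviceSubnet
	if err != nil {
		panic(err)
	}
	// Two CIDRs overlap iff one contains the other's network address.
	overlap := pod.Contains(svc.IP) || svc.Contains(pod.IP)
	fmt.Printf("pod=%s service=%s overlap=%v\n", pod, svc, overlap)
}
```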
	I1129 09:04:47.316982  540002 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:04:47.325866  540002 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:04:47.325934  540002 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:04:47.334027  540002 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1129 09:04:47.348124  540002 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:04:47.363062  540002 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1129 09:04:47.376378  540002 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:04:47.380213  540002 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:04:47.390043  540002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:04:47.473518  540002 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:04:47.502720  540002 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/newest-cni-106601 for IP: 192.168.94.2
	I1129 09:04:47.502777  540002 certs.go:195] generating shared ca certs ...
	I1129 09:04:47.502800  540002 certs.go:227] acquiring lock for ca certs: {Name:mk5e6bcae0a6944966b241f3c6197a472703c991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:04:47.502962  540002 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.key
	I1129 09:04:47.503018  540002 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.key
	I1129 09:04:47.503031  540002 certs.go:257] generating profile certs ...
	I1129 09:04:47.503139  540002 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/newest-cni-106601/client.key
	I1129 09:04:47.503205  540002 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/newest-cni-106601/apiserver.key.8f52e5f3
	I1129 09:04:47.503264  540002 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/newest-cni-106601/proxy-client.key
	I1129 09:04:47.503407  540002 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483.pem (1338 bytes)
	W1129 09:04:47.503447  540002 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483_empty.pem, impossibly tiny 0 bytes
	I1129 09:04:47.503458  540002 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 09:04:47.503487  540002 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:04:47.503517  540002 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:04:47.503548  540002 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem (1679 bytes)
	I1129 09:04:47.503603  540002 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem (1708 bytes)
	I1129 09:04:47.504327  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:04:47.524063  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:04:47.543390  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:04:47.566567  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 09:04:47.598366  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/newest-cni-106601/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1129 09:04:47.631168  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/newest-cni-106601/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 09:04:47.657875  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/newest-cni-106601/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:04:47.682526  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/newest-cni-106601/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 09:04:47.707637  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:04:47.734081  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483.pem --> /usr/share/ca-certificates/259483.pem (1338 bytes)
	I1129 09:04:47.757834  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem --> /usr/share/ca-certificates/2594832.pem (1708 bytes)
	I1129 09:04:47.782247  540002 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:04:47.802216  540002 ssh_runner.go:195] Run: openssl version
	I1129 09:04:47.812398  540002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:04:47.825424  540002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:04:47.832798  540002 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:04:47.832883  540002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:04:47.884801  540002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 09:04:47.897659  540002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259483.pem && ln -fs /usr/share/ca-certificates/259483.pem /etc/ssl/certs/259483.pem"
	I1129 09:04:47.908023  540002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259483.pem
	I1129 09:04:47.912703  540002 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:35 /usr/share/ca-certificates/259483.pem
	I1129 09:04:47.912801  540002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259483.pem
	I1129 09:04:47.950815  540002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259483.pem /etc/ssl/certs/51391683.0"
	I1129 09:04:47.961598  540002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2594832.pem && ln -fs /usr/share/ca-certificates/2594832.pem /etc/ssl/certs/2594832.pem"
	I1129 09:04:47.970983  540002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2594832.pem
	I1129 09:04:47.975318  540002 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:35 /usr/share/ca-certificates/2594832.pem
	I1129 09:04:47.975382  540002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2594832.pem
	I1129 09:04:48.011787  540002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2594832.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:04:48.020882  540002 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:04:48.024937  540002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 09:04:48.077023  540002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 09:04:48.142309  540002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 09:04:48.207068  540002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 09:04:48.270185  540002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 09:04:48.341824  540002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
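Each `openssl x509 -checkend 86400` run above asks whether the named certificate expires within the next 24 hours. The Go sketch below performs the same check by parsing the PEM directly; the file path is taken from the log and is only an example input.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// inside the given window, the Go equivalent of `openssl x509 -checkend`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```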
	I1129 09:04:48.432505  540002 kubeadm.go:401] StartCluster: {Name:newest-cni-106601 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-106601 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:04:48.432687  540002 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1129 09:04:48.432837  540002 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:04:48.503789  540002 cri.go:89] found id: "1a34f3692687524428db0a630cd5941a36ca50fe3367ce64034d94caedadde8c"
	I1129 09:04:48.503833  540002 cri.go:89] found id: "9b642e260eb4b5d26d7f23ed36f23236cb26d277be6e7cfb9d24ed67d7106b31"
	I1129 09:04:48.503839  540002 cri.go:89] found id: "12da39af965a594afb3be832e4fed048c25f5674090740c7e705c538a56eae17"
	I1129 09:04:48.503844  540002 cri.go:89] found id: "2331d62583ac9f10550bb4eaba3340dab40c03f5218c02d08f64882f0b1c4efd"
	I1129 09:04:48.503848  540002 cri.go:89] found id: "0a0709f3a32f1172a488e884f16bb33e9710f74cb127ec39237d993fb318da36"
	I1129 09:04:48.503854  540002 cri.go:89] found id: "48611e4305372052385ada3c5cf83f207932d786f0e90456beba3b8d51dbbb05"
	I1129 09:04:48.503864  540002 cri.go:89] found id: "0381cce8327708e526ae49357c2734ae8e40ce6de1ebbdd6e6398ba6f1d47e24"
	I1129 09:04:48.503868  540002 cri.go:89] found id: "37e3444d9c250591ff98cfb50f85bcbc6ba13fdc0ce437b26555cf7379276ffb"
	I1129 09:04:48.503873  540002 cri.go:89] found id: "2f3ba633c7f133d99d8b4712f9a6b313e59011b9722432913fb9d0c1235c9549"
	I1129 09:04:48.503883  540002 cri.go:89] found id: "b10201d00508a9df4afa664712a9150d2cf98e3382751ec1f8ef0e585560090d"
	I1129 09:04:48.503887  540002 cri.go:89] found id: ""
	I1129 09:04:48.503953  540002 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1129 09:04:48.541414  540002 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"12da39af965a594afb3be832e4fed048c25f5674090740c7e705c538a56eae17","pid":965,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12da39af965a594afb3be832e4fed048c25f5674090740c7e705c538a56eae17","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12da39af965a594afb3be832e4fed048c25f5674090740c7e705c538a56eae17/rootfs","created":"2025-11-29T09:04:48.319264039Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"ce95e7374b8278188ca24b99d033fc80e6e2c033e081e21d65077346a5cca7b1","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-106601","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ec937ace912df6c1bba8b2956c12b573"},"owner":"root"},{"ociVersion":"1.2.1","id":"1a34f3692687524428
db0a630cd5941a36ca50fe3367ce64034d94caedadde8c","pid":980,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a34f3692687524428db0a630cd5941a36ca50fe3367ce64034d94caedadde8c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a34f3692687524428db0a630cd5941a36ca50fe3367ce64034d94caedadde8c/rootfs","created":"2025-11-29T09:04:48.332463796Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"4ea5e68b0efcfa0b1fba7653ede7aad11198b1359923f103f490b002316a5875","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-106601","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2aff9f1628c54092dcee2cd221e4eb70"},"owner":"root"},{"ociVersion":"1.2.1","id":"2331d62583ac9f10550bb4eaba3340dab40c03f5218c02d08f64882f0b1c4efd","pid":916,"status":"running","
bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2331d62583ac9f10550bb4eaba3340dab40c03f5218c02d08f64882f0b1c4efd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2331d62583ac9f10550bb4eaba3340dab40c03f5218c02d08f64882f0b1c4efd/rootfs","created":"2025-11-29T09:04:48.259240777Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"8552d39438ce8d11ca8f8a5435fd73f831bb2c0f16690406ae3c882f10dbcebe","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-106601","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"3a3159ecee55ca692de91698a24fc36e"},"owner":"root"},{"ociVersion":"1.2.1","id":"4ea5e68b0efcfa0b1fba7653ede7aad11198b1359923f103f490b002316a5875","pid":862,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ea5e68b0efcfa0b1fba7653ede7aad11198b1359923f103f490b002316a5875","rootfs
":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ea5e68b0efcfa0b1fba7653ede7aad11198b1359923f103f490b002316a5875/rootfs","created":"2025-11-29T09:04:48.143996691Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"4ea5e68b0efcfa0b1fba7653ede7aad11198b1359923f103f490b002316a5875","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-106601_2aff9f1628c54092dcee2cd221e4eb70","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-106601","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2aff9f1628c54092dcee2cd221e4eb70"},"owner":"root"},{"ociVersion":"1.2.1","id":"8552d39438ce8d11ca8f8a5435fd73f831bb2c0f16690406ae3c882f10dbc
ebe","pid":793,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8552d39438ce8d11ca8f8a5435fd73f831bb2c0f16690406ae3c882f10dbcebe","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8552d39438ce8d11ca8f8a5435fd73f831bb2c0f16690406ae3c882f10dbcebe/rootfs","created":"2025-11-29T09:04:48.097894402Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"8552d39438ce8d11ca8f8a5435fd73f831bb2c0f16690406ae3c882f10dbcebe","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-106601_3a3159ecee55ca692de91698a24fc36e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-106601","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"3a3159ec
ee55ca692de91698a24fc36e"},"owner":"root"},{"ociVersion":"1.2.1","id":"9b642e260eb4b5d26d7f23ed36f23236cb26d277be6e7cfb9d24ed67d7106b31","pid":971,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9b642e260eb4b5d26d7f23ed36f23236cb26d277be6e7cfb9d24ed67d7106b31","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9b642e260eb4b5d26d7f23ed36f23236cb26d277be6e7cfb9d24ed67d7106b31/rootfs","created":"2025-11-29T09:04:48.347690228Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"d74d2e3ba1383af19996948562d0ac1cfcf9fdb7fa9f4f090fd20efb02f69b77","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-106601","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"589cd80f21a21d0fdb9074d648368f4c"},"owner":"root"},{"ociVersion":"1.2.1","id":"ce95e7374b8278188ca24b99d033fc80e6
e2c033e081e21d65077346a5cca7b1","pid":855,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce95e7374b8278188ca24b99d033fc80e6e2c033e081e21d65077346a5cca7b1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce95e7374b8278188ca24b99d033fc80e6e2c033e081e21d65077346a5cca7b1/rootfs","created":"2025-11-29T09:04:48.136298178Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"ce95e7374b8278188ca24b99d033fc80e6e2c033e081e21d65077346a5cca7b1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-106601_ec937ace912df6c1bba8b2956c12b573","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-106601","io.kubernetes.cri.sandbox-namespace":"kube-sy
stem","io.kubernetes.cri.sandbox-uid":"ec937ace912df6c1bba8b2956c12b573"},"owner":"root"},{"ociVersion":"1.2.1","id":"d74d2e3ba1383af19996948562d0ac1cfcf9fdb7fa9f4f090fd20efb02f69b77","pid":866,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d74d2e3ba1383af19996948562d0ac1cfcf9fdb7fa9f4f090fd20efb02f69b77","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d74d2e3ba1383af19996948562d0ac1cfcf9fdb7fa9f4f090fd20efb02f69b77/rootfs","created":"2025-11-29T09:04:48.153663888Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"d74d2e3ba1383af19996948562d0ac1cfcf9fdb7fa9f4f090fd20efb02f69b77","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-106601_589cd80f21a21d0fdb9074d648368f4c","
io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-106601","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"589cd80f21a21d0fdb9074d648368f4c"},"owner":"root"}]
	I1129 09:04:48.541609  540002 cri.go:126] list returned 8 containers
	I1129 09:04:48.541622  540002 cri.go:129] container: {ID:12da39af965a594afb3be832e4fed048c25f5674090740c7e705c538a56eae17 Status:running}
	I1129 09:04:48.541656  540002 cri.go:135] skipping {12da39af965a594afb3be832e4fed048c25f5674090740c7e705c538a56eae17 running}: state = "running", want "paused"
	I1129 09:04:48.541668  540002 cri.go:129] container: {ID:1a34f3692687524428db0a630cd5941a36ca50fe3367ce64034d94caedadde8c Status:running}
	I1129 09:04:48.541675  540002 cri.go:135] skipping {1a34f3692687524428db0a630cd5941a36ca50fe3367ce64034d94caedadde8c running}: state = "running", want "paused"
	I1129 09:04:48.541682  540002 cri.go:129] container: {ID:2331d62583ac9f10550bb4eaba3340dab40c03f5218c02d08f64882f0b1c4efd Status:running}
	I1129 09:04:48.541690  540002 cri.go:135] skipping {2331d62583ac9f10550bb4eaba3340dab40c03f5218c02d08f64882f0b1c4efd running}: state = "running", want "paused"
	I1129 09:04:48.541696  540002 cri.go:129] container: {ID:4ea5e68b0efcfa0b1fba7653ede7aad11198b1359923f103f490b002316a5875 Status:running}
	I1129 09:04:48.541705  540002 cri.go:131] skipping 4ea5e68b0efcfa0b1fba7653ede7aad11198b1359923f103f490b002316a5875 - not in ps
	I1129 09:04:48.541710  540002 cri.go:129] container: {ID:8552d39438ce8d11ca8f8a5435fd73f831bb2c0f16690406ae3c882f10dbcebe Status:running}
	I1129 09:04:48.541715  540002 cri.go:131] skipping 8552d39438ce8d11ca8f8a5435fd73f831bb2c0f16690406ae3c882f10dbcebe - not in ps
	I1129 09:04:48.541721  540002 cri.go:129] container: {ID:9b642e260eb4b5d26d7f23ed36f23236cb26d277be6e7cfb9d24ed67d7106b31 Status:running}
	I1129 09:04:48.541744  540002 cri.go:135] skipping {9b642e260eb4b5d26d7f23ed36f23236cb26d277be6e7cfb9d24ed67d7106b31 running}: state = "running", want "paused"
	I1129 09:04:48.541755  540002 cri.go:129] container: {ID:ce95e7374b8278188ca24b99d033fc80e6e2c033e081e21d65077346a5cca7b1 Status:running}
	I1129 09:04:48.541763  540002 cri.go:131] skipping ce95e7374b8278188ca24b99d033fc80e6e2c033e081e21d65077346a5cca7b1 - not in ps
	I1129 09:04:48.541767  540002 cri.go:129] container: {ID:d74d2e3ba1383af19996948562d0ac1cfcf9fdb7fa9f4f090fd20efb02f69b77 Status:running}
	I1129 09:04:48.541774  540002 cri.go:131] skipping d74d2e3ba1383af19996948562d0ac1cfcf9fdb7fa9f4f090fd20efb02f69b77 - not in ps
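The restart path above lists kube-system containers with `runc --root /run/containerd/runc/k8s.io list -f json` and keeps only entries whose status matches the wanted state ("paused" here), which is why every running container and every sandbox not in the crictl ps output is skipped. The Go sketch below reproduces that filter over the same JSON shape; the struct fields and the inline sample are assumptions based on the log, not the actual cri.go types.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// runcContainer is the subset of `runc list -f json` output used here.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// filterByState returns the IDs whose status equals the wanted state.
func filterByState(raw []byte, want string) ([]string, error) {
	var cs []runcContainer
	if err := json.Unmarshal(raw, &cs); err != nil {
		return nil, err
	}
	var ids []string
	for _, c := range cs {
		if c.Status == want {
			ids = append(ids, c.ID)
		}
	}
	return ids, nil
}

func main() {
	sample := []byte(`[{"id":"12da39af","status":"running"},{"id":"9b642e26","status":"paused"}]`)
	ids, err := filterByState(sample, "paused")
	if err != nil {
		panic(err)
	}
	fmt.Println(ids) // only the paused ID survives, mirroring the "skipping ... want paused" lines
}
```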
	I1129 09:04:48.541826  540002 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:04:48.561896  540002 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 09:04:48.561923  540002 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 09:04:48.561971  540002 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 09:04:48.586587  540002 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 09:04:48.588263  540002 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-106601" does not appear in /home/jenkins/minikube-integration/22000-255825/kubeconfig
	I1129 09:04:48.589069  540002 kubeconfig.go:62] /home/jenkins/minikube-integration/22000-255825/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-106601" cluster setting kubeconfig missing "newest-cni-106601" context setting]
	I1129 09:04:48.590377  540002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/kubeconfig: {Name:mk7d91966efd00ccef892cf02f31ec14469accbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:04:48.595894  540002 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 09:04:48.610984  540002 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1129 09:04:48.611158  540002 kubeadm.go:602] duration metric: took 49.225267ms to restartPrimaryControlPlane
	I1129 09:04:48.611205  540002 kubeadm.go:403] duration metric: took 178.728899ms to StartCluster
	I1129 09:04:48.611229  540002 settings.go:142] acquiring lock: {Name:mk6dbed29e5e99d89b1cbbd9e561d8f8791ae9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:04:48.611308  540002 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-255825/kubeconfig
	I1129 09:04:48.613488  540002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/kubeconfig: {Name:mk7d91966efd00ccef892cf02f31ec14469accbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:04:48.613783  540002 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1129 09:04:48.614053  540002 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:04:48.614159  540002 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-106601"
	I1129 09:04:48.614179  540002 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-106601"
	W1129 09:04:48.614188  540002 addons.go:248] addon storage-provisioner should already be in state true
	I1129 09:04:48.614219  540002 host.go:66] Checking if "newest-cni-106601" exists ...
	I1129 09:04:48.614276  540002 config.go:182] Loaded profile config "newest-cni-106601": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:04:48.614331  540002 addons.go:70] Setting default-storageclass=true in profile "newest-cni-106601"
	I1129 09:04:48.614345  540002 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-106601"
	I1129 09:04:48.614641  540002 cli_runner.go:164] Run: docker container inspect newest-cni-106601 --format={{.State.Status}}
	I1129 09:04:48.614749  540002 cli_runner.go:164] Run: docker container inspect newest-cni-106601 --format={{.State.Status}}
	I1129 09:04:48.614890  540002 addons.go:70] Setting dashboard=true in profile "newest-cni-106601"
	I1129 09:04:48.614905  540002 addons.go:70] Setting metrics-server=true in profile "newest-cni-106601"
	I1129 09:04:48.614915  540002 addons.go:239] Setting addon dashboard=true in "newest-cni-106601"
	I1129 09:04:48.614924  540002 addons.go:239] Setting addon metrics-server=true in "newest-cni-106601"
	W1129 09:04:48.614934  540002 addons.go:248] addon metrics-server should already be in state true
	I1129 09:04:48.614961  540002 host.go:66] Checking if "newest-cni-106601" exists ...
	W1129 09:04:48.614967  540002 addons.go:248] addon dashboard should already be in state true
	I1129 09:04:48.615000  540002 host.go:66] Checking if "newest-cni-106601" exists ...
	I1129 09:04:48.615405  540002 cli_runner.go:164] Run: docker container inspect newest-cni-106601 --format={{.State.Status}}
	I1129 09:04:48.615611  540002 cli_runner.go:164] Run: docker container inspect newest-cni-106601 --format={{.State.Status}}
	I1129 09:04:48.615889  540002 out.go:179] * Verifying Kubernetes components...
	I1129 09:04:48.618636  540002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:04:48.644501  540002 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1129 09:04:48.646523  540002 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1129 09:04:48.647541  540002 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1129 09:04:48.647608  540002 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1129 09:04:48.647708  540002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-106601
	I1129 09:04:48.660841  540002 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:04:48.661029  540002 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	4d1c856805d0a       56cc512116c8f       8 seconds ago       Running             busybox                   0                   439301fd61641       busybox                                                default
	66a3ae80c6174       52546a367cc9e       14 seconds ago      Running             coredns                   0                   567152cbf13bf       coredns-66bc5c9577-d7vmg                               kube-system
	a84a625c10a66       6e38f40d628db       14 seconds ago      Running             storage-provisioner       0                   86786bfc75566       storage-provisioner                                    kube-system
	634267a48c9ee       409467f978b4a       26 seconds ago      Running             kindnet-cni               0                   85e39fdd58596       kindnet-g5whk                                          kube-system
	7383b28a1b358       fc25172553d79       26 seconds ago      Running             kube-proxy                0                   c1a0327d519fc       kube-proxy-v9bbz                                       kube-system
	3effd19c5883f       5f1f5298c888d       37 seconds ago      Running             etcd                      0                   b1375bd22fe4c       etcd-default-k8s-diff-port-357829                      kube-system
	1018519011733       c3994bc696102       37 seconds ago      Running             kube-apiserver            0                   75476fab535a6       kube-apiserver-default-k8s-diff-port-357829            kube-system
	2a2e1928a205a       7dd6aaa1717ab       37 seconds ago      Running             kube-scheduler            0                   c5edcc06db6c2       kube-scheduler-default-k8s-diff-port-357829            kube-system
	30faae14a64ae       c80c8dbafe7dd       37 seconds ago      Running             kube-controller-manager   0                   2a6c01d6e1319       kube-controller-manager-default-k8s-diff-port-357829   kube-system
	
	
	==> containerd <==
	Nov 29 09:04:34 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:34.909847368Z" level=info msg="StartContainer for \"a84a625c10a66eca43ad40359036d8f8bae7f97fdb8d57d903806a13bdd7de2d\""
	Nov 29 09:04:34 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:34.912486134Z" level=info msg="connecting to shim a84a625c10a66eca43ad40359036d8f8bae7f97fdb8d57d903806a13bdd7de2d" address="unix:///run/containerd/s/668b6d16f551d3ab4b9d1881ee008512f38b3b8dbcc0ba011d854a44b74662b0" protocol=ttrpc version=3
	Nov 29 09:04:34 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:34.921379893Z" level=info msg="CreateContainer within sandbox \"567152cbf13bff4c1d14dd2112fcd3e28303ca49c4e7030f50dd073b50549f88\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 29 09:04:34 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:34.929829109Z" level=info msg="Container 66a3ae80c61746658348d60a62eab2930b5ad08cf7a1c909a1439060cdd1cdd7: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:04:34 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:34.938326164Z" level=info msg="CreateContainer within sandbox \"567152cbf13bff4c1d14dd2112fcd3e28303ca49c4e7030f50dd073b50549f88\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"66a3ae80c61746658348d60a62eab2930b5ad08cf7a1c909a1439060cdd1cdd7\""
	Nov 29 09:04:34 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:34.939723621Z" level=info msg="StartContainer for \"66a3ae80c61746658348d60a62eab2930b5ad08cf7a1c909a1439060cdd1cdd7\""
	Nov 29 09:04:34 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:34.942663414Z" level=info msg="connecting to shim 66a3ae80c61746658348d60a62eab2930b5ad08cf7a1c909a1439060cdd1cdd7" address="unix:///run/containerd/s/42fd1cb6f027819fd08220fa4a3d5c5af17174c387922f3a55bf6c8b2d55a665" protocol=ttrpc version=3
	Nov 29 09:04:34 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:34.982196209Z" level=info msg="StartContainer for \"a84a625c10a66eca43ad40359036d8f8bae7f97fdb8d57d903806a13bdd7de2d\" returns successfully"
	Nov 29 09:04:35 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:35.012336135Z" level=info msg="StartContainer for \"66a3ae80c61746658348d60a62eab2930b5ad08cf7a1c909a1439060cdd1cdd7\" returns successfully"
	Nov 29 09:04:38 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:38.513802565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:a7187d53-caa5-4d82-a363-42dacbd45f01,Namespace:default,Attempt:0,}"
	Nov 29 09:04:38 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:38.552838921Z" level=info msg="connecting to shim 439301fd61641c2775c93c0f05d90c3ccbcf251a873c2c816b228c3de587e2f7" address="unix:///run/containerd/s/b878ca3f6f91937f9be62e602753711ebfd091f1a7aebb1d4bc44f7db49c49de" namespace=k8s.io protocol=ttrpc version=3
	Nov 29 09:04:38 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:38.635176176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:a7187d53-caa5-4d82-a363-42dacbd45f01,Namespace:default,Attempt:0,} returns sandbox id \"439301fd61641c2775c93c0f05d90c3ccbcf251a873c2c816b228c3de587e2f7\""
	Nov 29 09:04:38 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:38.638695027Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 29 09:04:41 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:41.319501915Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:04:41 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:41.320676365Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396647"
	Nov 29 09:04:41 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:41.321872804Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:04:41 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:41.324015337Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:04:41 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:41.324719708Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.685903014s"
	Nov 29 09:04:41 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:41.324784343Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 29 09:04:41 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:41.328928236Z" level=info msg="CreateContainer within sandbox \"439301fd61641c2775c93c0f05d90c3ccbcf251a873c2c816b228c3de587e2f7\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 29 09:04:41 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:41.336417910Z" level=info msg="Container 4d1c856805d0aa751125afa04694d2d9343c8904fcdff215566e5c873c81af57: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:04:41 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:41.341768402Z" level=info msg="CreateContainer within sandbox \"439301fd61641c2775c93c0f05d90c3ccbcf251a873c2c816b228c3de587e2f7\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"4d1c856805d0aa751125afa04694d2d9343c8904fcdff215566e5c873c81af57\""
	Nov 29 09:04:41 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:41.342354060Z" level=info msg="StartContainer for \"4d1c856805d0aa751125afa04694d2d9343c8904fcdff215566e5c873c81af57\""
	Nov 29 09:04:41 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:41.343091405Z" level=info msg="connecting to shim 4d1c856805d0aa751125afa04694d2d9343c8904fcdff215566e5c873c81af57" address="unix:///run/containerd/s/b878ca3f6f91937f9be62e602753711ebfd091f1a7aebb1d4bc44f7db49c49de" protocol=ttrpc version=3
	Nov 29 09:04:41 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:41.411995510Z" level=info msg="StartContainer for \"4d1c856805d0aa751125afa04694d2d9343c8904fcdff215566e5c873c81af57\" returns successfully"
	
	
	==> coredns [66a3ae80c61746658348d60a62eab2930b5ad08cf7a1c909a1439060cdd1cdd7] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37370 - 56095 "HINFO IN 1396913741126626310.2881560545536060347. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033848092s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-357829
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-357829
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=default-k8s-diff-port-357829
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_04_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:04:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-357829
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:04:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:04:48 +0000   Sat, 29 Nov 2025 09:04:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:04:48 +0000   Sat, 29 Nov 2025 09:04:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:04:48 +0000   Sat, 29 Nov 2025 09:04:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:04:48 +0000   Sat, 29 Nov 2025 09:04:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-357829
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                c7cf2208-b787-4439-9b47-54475ca3d04f
	  Boot ID:                    b81dce2f-73d5-4349-b473-aa1210058cb8
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-d7vmg                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-default-k8s-diff-port-357829                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-g5whk                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-default-k8s-diff-port-357829             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-357829    200m (2%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-v9bbz                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-default-k8s-diff-port-357829             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 39s   kubelet          Starting kubelet.
	  Normal  Starting                 33s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  33s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  33s   kubelet          Node default-k8s-diff-port-357829 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s   kubelet          Node default-k8s-diff-port-357829 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s   kubelet          Node default-k8s-diff-port-357829 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node default-k8s-diff-port-357829 event: Registered Node default-k8s-diff-port-357829 in Controller
	  Normal  NodeReady                16s   kubelet          Node default-k8s-diff-port-357829 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov29 07:17] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001881] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084003] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.378167] i8042: Warning: Keylock active
	[  +0.012106] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.460417] block sda: the capability attribute has been deprecated.
	[  +0.079627] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.021012] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.285522] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [3effd19c5883f9175a3107ccf1521e283880d674cd323abfdc755cebd4249c98] <==
	{"level":"warn","ts":"2025-11-29T09:04:14.286040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:04:14.299320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:04:14.306130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:04:14.325178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:04:14.332053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:04:14.339984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:04:14.384999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:04:21.166191Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.788993ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790340320339420 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/disruption-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/disruption-controller\" value_size:126 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-29T09:04:21.166367Z","caller":"traceutil/trace.go:172","msg":"trace[1875009778] transaction","detail":"{read_only:false; response_revision:293; number_of_response:1; }","duration":"198.299577ms","start":"2025-11-29T09:04:20.968040Z","end":"2025-11-29T09:04:21.166339Z","steps":["trace[1875009778] 'process raft request'  (duration: 58.968887ms)","trace[1875009778] 'compare'  (duration: 138.643269ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-29T09:04:21.621068Z","caller":"traceutil/trace.go:172","msg":"trace[518274994] transaction","detail":"{read_only:false; response_revision:295; number_of_response:1; }","duration":"252.669049ms","start":"2025-11-29T09:04:21.368375Z","end":"2025-11-29T09:04:21.621045Z","steps":["trace[518274994] 'process raft request'  (duration: 252.540523ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-29T09:04:21.894100Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"269.958082ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" limit:1 ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2025-11-29T09:04:21.894169Z","caller":"traceutil/trace.go:172","msg":"trace[1295694699] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:295; }","duration":"270.043585ms","start":"2025-11-29T09:04:21.624109Z","end":"2025-11-29T09:04:21.894152Z","steps":["trace[1295694699] 'range keys from in-memory index tree'  (duration: 269.791727ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-29T09:04:22.144475Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.360675ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790340320339437 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" value_size:126 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-29T09:04:22.144700Z","caller":"traceutil/trace.go:172","msg":"trace[998990797] transaction","detail":"{read_only:false; response_revision:297; number_of_response:1; }","duration":"181.532983ms","start":"2025-11-29T09:04:21.963151Z","end":"2025-11-29T09:04:22.144684Z","steps":["trace[998990797] 'process raft request'  (duration: 53.909406ms)","trace[998990797] 'compare'  (duration: 127.236145ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-29T09:04:22.413146Z","caller":"traceutil/trace.go:172","msg":"trace[1094955470] transaction","detail":"{read_only:false; response_revision:303; number_of_response:1; }","duration":"136.357749ms","start":"2025-11-29T09:04:22.276767Z","end":"2025-11-29T09:04:22.413125Z","steps":["trace[1094955470] 'process raft request'  (duration: 136.301935ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:22.413339Z","caller":"traceutil/trace.go:172","msg":"trace[2041338908] transaction","detail":"{read_only:false; response_revision:302; number_of_response:1; }","duration":"138.342577ms","start":"2025-11-29T09:04:22.274974Z","end":"2025-11-29T09:04:22.413317Z","steps":["trace[2041338908] 'process raft request'  (duration: 137.932957ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-29T09:04:36.758138Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"183.915899ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-29T09:04:36.758218Z","caller":"traceutil/trace.go:172","msg":"trace[275767652] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:419; }","duration":"184.010852ms","start":"2025-11-29T09:04:36.574189Z","end":"2025-11-29T09:04:36.758200Z","steps":["trace[275767652] 'agreement among raft nodes before linearized reading'  (duration: 54.241902ms)","trace[275767652] 'range keys from in-memory index tree'  (duration: 129.626691ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-29T09:04:36.758440Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.703317ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790340320339752 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.103.2\" mod_revision:389 > success:<request_put:<key:\"/registry/masterleases/192.168.103.2\" value_size:66 lease:4650418303465563942 >> failure:<request_range:<key:\"/registry/masterleases/192.168.103.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-29T09:04:36.758525Z","caller":"traceutil/trace.go:172","msg":"trace[1852514116] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"254.84566ms","start":"2025-11-29T09:04:36.503665Z","end":"2025-11-29T09:04:36.758511Z","steps":["trace[1852514116] 'process raft request'  (duration: 124.804294ms)","trace[1852514116] 'compare'  (duration: 129.604883ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-29T09:04:37.104066Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"154.146215ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-29T09:04:37.104150Z","caller":"traceutil/trace.go:172","msg":"trace[1155203678] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:420; }","duration":"154.253141ms","start":"2025-11-29T09:04:36.949879Z","end":"2025-11-29T09:04:37.104132Z","steps":["trace[1155203678] 'range keys from in-memory index tree'  (duration: 154.096983ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-29T09:04:37.104158Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.890411ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-357829\" limit:1 ","response":"range_response_count:1 size:4532"}
	{"level":"info","ts":"2025-11-29T09:04:37.104207Z","caller":"traceutil/trace.go:172","msg":"trace[912961907] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-357829; range_end:; response_count:1; response_revision:420; }","duration":"130.95116ms","start":"2025-11-29T09:04:36.973245Z","end":"2025-11-29T09:04:37.104196Z","steps":["trace[912961907] 'range keys from in-memory index tree'  (duration: 130.725599ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:37.232523Z","caller":"traceutil/trace.go:172","msg":"trace[1297352301] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"121.783541ms","start":"2025-11-29T09:04:37.110720Z","end":"2025-11-29T09:04:37.232504Z","steps":["trace[1297352301] 'process raft request'  (duration: 121.632424ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:04:50 up  1:47,  0 user,  load average: 6.38, 3.92, 11.44
	Linux default-k8s-diff-port-357829 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [634267a48c9ee4a113f706b11c4923aa743934332d4a645040da54c768f74ea1] <==
	I1129 09:04:24.086389       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:04:24.086728       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1129 09:04:24.086942       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:04:24.086966       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:04:24.086981       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:04:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:04:24.386215       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:04:24.386242       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:04:24.386255       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:04:24.449870       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:04:24.786380       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:04:24.786426       1 metrics.go:72] Registering metrics
	I1129 09:04:24.786523       1 controller.go:711] "Syncing nftables rules"
	I1129 09:04:34.387272       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1129 09:04:34.387342       1 main.go:301] handling current node
	I1129 09:04:44.386061       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1129 09:04:44.386100       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1018519011733056917c1040d66f2f3b50adbe41e935b8e5e3a77ad04a4f2cec] <==
	E1129 09:04:14.946979       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1129 09:04:14.994469       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 09:04:14.997436       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:04:14.997495       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1129 09:04:15.001721       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:04:15.001766       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 09:04:15.086989       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:04:15.797278       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1129 09:04:15.801333       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1129 09:04:15.801350       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:04:16.347002       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:04:16.385850       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:04:16.504133       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1129 09:04:16.510527       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1129 09:04:16.511843       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:04:16.517264       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:04:16.820937       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:04:17.568388       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:04:17.585645       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1129 09:04:17.595141       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1129 09:04:22.420877       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:04:22.421650       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:04:22.428123       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:04:22.825109       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1129 09:04:48.392923       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:39430: use of closed network connection
	
	
	==> kube-controller-manager [30faae14a64ae82b07ed17cc7e4d78756201313e32105c0c66064b8bcc62bc83] <==
	I1129 09:04:22.222665       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1129 09:04:22.222774       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1129 09:04:22.223870       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1129 09:04:22.223966       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 09:04:22.224063       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1129 09:04:22.227214       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:04:22.232368       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1129 09:04:22.240832       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1129 09:04:22.247272       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1129 09:04:22.247362       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1129 09:04:22.253582       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:04:22.258800       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1129 09:04:22.262291       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-357829" podCIDRs=["10.244.0.0/24"]
	I1129 09:04:22.270065       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1129 09:04:22.270191       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1129 09:04:22.270228       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1129 09:04:22.270288       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1129 09:04:22.270813       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 09:04:22.273056       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 09:04:22.273542       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:04:22.275027       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1129 09:04:22.275174       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1129 09:04:22.275331       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-357829"
	I1129 09:04:22.275428       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1129 09:04:37.277596       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [7383b28a1b35820a1e07133341cbb9130ac641d77de659266bcd4ac2296264e9] <==
	I1129 09:04:23.529937       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:04:23.617952       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:04:23.718513       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:04:23.718567       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1129 09:04:23.718693       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:04:23.748087       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:04:23.748147       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:04:23.756410       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:04:23.757352       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:04:23.757747       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:04:23.760667       1 config.go:200] "Starting service config controller"
	I1129 09:04:23.760782       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:04:23.760694       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:04:23.760806       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:04:23.760929       1 config.go:309] "Starting node config controller"
	I1129 09:04:23.760945       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:04:23.760723       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:04:23.763727       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:04:23.765878       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1129 09:04:23.861230       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 09:04:23.861252       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 09:04:23.861230       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [2a2e1928a205a6f671a9d953f408cc7a51eec7b6e0e412ec88c2b9238beb6579] <==
	E1129 09:04:14.856341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 09:04:14.856812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:04:14.856859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 09:04:14.856989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 09:04:14.857090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 09:04:14.857188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 09:04:14.857883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 09:04:14.857912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 09:04:14.857967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 09:04:14.858299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 09:04:14.858353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1129 09:04:14.858310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 09:04:15.696901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 09:04:15.720319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 09:04:15.741496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 09:04:15.742325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 09:04:15.783296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:04:15.810461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 09:04:15.902316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 09:04:15.903213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 09:04:16.045558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1129 09:04:16.062054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 09:04:16.090151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 09:04:16.100776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1129 09:04:17.752776       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 09:04:18 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:18.497769    1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-357829" podStartSLOduration=1.497724054 podStartE2EDuration="1.497724054s" podCreationTimestamp="2025-11-29 09:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:18.488337798 +0000 UTC m=+1.143701825" watchObservedRunningTime="2025-11-29 09:04:18.497724054 +0000 UTC m=+1.153088082"
	Nov 29 09:04:18 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:18.497921    1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-357829" podStartSLOduration=2.497911863 podStartE2EDuration="2.497911863s" podCreationTimestamp="2025-11-29 09:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:18.497701309 +0000 UTC m=+1.153065337" watchObservedRunningTime="2025-11-29 09:04:18.497911863 +0000 UTC m=+1.153275890"
	Nov 29 09:04:18 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:18.523513    1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-357829" podStartSLOduration=1.523492159 podStartE2EDuration="1.523492159s" podCreationTimestamp="2025-11-29 09:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:18.510562949 +0000 UTC m=+1.165926974" watchObservedRunningTime="2025-11-29 09:04:18.523492159 +0000 UTC m=+1.178856187"
	Nov 29 09:04:18 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:18.535948    1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-357829" podStartSLOduration=3.535930395 podStartE2EDuration="3.535930395s" podCreationTimestamp="2025-11-29 09:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:18.523930739 +0000 UTC m=+1.179294767" watchObservedRunningTime="2025-11-29 09:04:18.535930395 +0000 UTC m=+1.191294423"
	Nov 29 09:04:22 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:22.315595    1421 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 29 09:04:22 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:22.317154    1421 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 29 09:04:22 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:22.865831    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxx59\" (UniqueName: \"kubernetes.io/projected/6a515c70-840f-41c2-b1e4-6de13b23e5f3-kube-api-access-qxx59\") pod \"kube-proxy-v9bbz\" (UID: \"6a515c70-840f-41c2-b1e4-6de13b23e5f3\") " pod="kube-system/kube-proxy-v9bbz"
	Nov 29 09:04:22 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:22.865884    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a515c70-840f-41c2-b1e4-6de13b23e5f3-lib-modules\") pod \"kube-proxy-v9bbz\" (UID: \"6a515c70-840f-41c2-b1e4-6de13b23e5f3\") " pod="kube-system/kube-proxy-v9bbz"
	Nov 29 09:04:22 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:22.865917    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6a515c70-840f-41c2-b1e4-6de13b23e5f3-kube-proxy\") pod \"kube-proxy-v9bbz\" (UID: \"6a515c70-840f-41c2-b1e4-6de13b23e5f3\") " pod="kube-system/kube-proxy-v9bbz"
	Nov 29 09:04:22 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:22.865941    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a515c70-840f-41c2-b1e4-6de13b23e5f3-xtables-lock\") pod \"kube-proxy-v9bbz\" (UID: \"6a515c70-840f-41c2-b1e4-6de13b23e5f3\") " pod="kube-system/kube-proxy-v9bbz"
	Nov 29 09:04:22 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:22.967208    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgj94\" (UniqueName: \"kubernetes.io/projected/5563c069-5b20-4835-941c-48eb3b04c051-kube-api-access-bgj94\") pod \"kindnet-g5whk\" (UID: \"5563c069-5b20-4835-941c-48eb3b04c051\") " pod="kube-system/kindnet-g5whk"
	Nov 29 09:04:22 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:22.967623    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5563c069-5b20-4835-941c-48eb3b04c051-lib-modules\") pod \"kindnet-g5whk\" (UID: \"5563c069-5b20-4835-941c-48eb3b04c051\") " pod="kube-system/kindnet-g5whk"
	Nov 29 09:04:22 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:22.967695    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5563c069-5b20-4835-941c-48eb3b04c051-xtables-lock\") pod \"kindnet-g5whk\" (UID: \"5563c069-5b20-4835-941c-48eb3b04c051\") " pod="kube-system/kindnet-g5whk"
	Nov 29 09:04:22 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:22.967744    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5563c069-5b20-4835-941c-48eb3b04c051-cni-cfg\") pod \"kindnet-g5whk\" (UID: \"5563c069-5b20-4835-941c-48eb3b04c051\") " pod="kube-system/kindnet-g5whk"
	Nov 29 09:04:24 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:24.491925    1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v9bbz" podStartSLOduration=2.491903092 podStartE2EDuration="2.491903092s" podCreationTimestamp="2025-11-29 09:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:24.491601661 +0000 UTC m=+7.146965690" watchObservedRunningTime="2025-11-29 09:04:24.491903092 +0000 UTC m=+7.147267120"
	Nov 29 09:04:24 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:24.502218    1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-g5whk" podStartSLOduration=2.502192331 podStartE2EDuration="2.502192331s" podCreationTimestamp="2025-11-29 09:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:24.501891007 +0000 UTC m=+7.157255036" watchObservedRunningTime="2025-11-29 09:04:24.502192331 +0000 UTC m=+7.157556360"
	Nov 29 09:04:34 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:34.427252    1421 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 29 09:04:34 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:34.560153    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cgmt\" (UniqueName: \"kubernetes.io/projected/d9aa47c6-1005-4a91-a986-819f21c0cfda-kube-api-access-8cgmt\") pod \"storage-provisioner\" (UID: \"d9aa47c6-1005-4a91-a986-819f21c0cfda\") " pod="kube-system/storage-provisioner"
	Nov 29 09:04:34 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:34.560223    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ebe88f4-4c20-4523-8642-f54615c1f605-config-volume\") pod \"coredns-66bc5c9577-d7vmg\" (UID: \"4ebe88f4-4c20-4523-8642-f54615c1f605\") " pod="kube-system/coredns-66bc5c9577-d7vmg"
	Nov 29 09:04:34 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:34.560250    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg92d\" (UniqueName: \"kubernetes.io/projected/4ebe88f4-4c20-4523-8642-f54615c1f605-kube-api-access-mg92d\") pod \"coredns-66bc5c9577-d7vmg\" (UID: \"4ebe88f4-4c20-4523-8642-f54615c1f605\") " pod="kube-system/coredns-66bc5c9577-d7vmg"
	Nov 29 09:04:34 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:34.560353    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d9aa47c6-1005-4a91-a986-819f21c0cfda-tmp\") pod \"storage-provisioner\" (UID: \"d9aa47c6-1005-4a91-a986-819f21c0cfda\") " pod="kube-system/storage-provisioner"
	Nov 29 09:04:35 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:35.541121    1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-d7vmg" podStartSLOduration=12.541098367 podStartE2EDuration="12.541098367s" podCreationTimestamp="2025-11-29 09:04:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:35.529794717 +0000 UTC m=+18.185158744" watchObservedRunningTime="2025-11-29 09:04:35.541098367 +0000 UTC m=+18.196462398"
	Nov 29 09:04:35 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:35.541254    1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.541246386 podStartE2EDuration="12.541246386s" podCreationTimestamp="2025-11-29 09:04:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:35.540644121 +0000 UTC m=+18.196008145" watchObservedRunningTime="2025-11-29 09:04:35.541246386 +0000 UTC m=+18.196610414"
	Nov 29 09:04:38 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:38.285926    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kkmd\" (UniqueName: \"kubernetes.io/projected/a7187d53-caa5-4d82-a363-42dacbd45f01-kube-api-access-4kkmd\") pod \"busybox\" (UID: \"a7187d53-caa5-4d82-a363-42dacbd45f01\") " pod="default/busybox"
	Nov 29 09:04:41 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:41.550469    1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.862618884 podStartE2EDuration="3.550445582s" podCreationTimestamp="2025-11-29 09:04:38 +0000 UTC" firstStartedPulling="2025-11-29 09:04:38.637975097 +0000 UTC m=+21.293339104" lastFinishedPulling="2025-11-29 09:04:41.325801795 +0000 UTC m=+23.981165802" observedRunningTime="2025-11-29 09:04:41.550305968 +0000 UTC m=+24.205669996" watchObservedRunningTime="2025-11-29 09:04:41.550445582 +0000 UTC m=+24.205809611"
	
	
	==> storage-provisioner [a84a625c10a66eca43ad40359036d8f8bae7f97fdb8d57d903806a13bdd7de2d] <==
	I1129 09:04:34.993932       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1129 09:04:35.006256       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1129 09:04:35.006308       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 09:04:35.009364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:35.015713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:04:35.016055       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:04:35.016415       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-357829_b6a1c520-de9d-494e-adbf-4b6205489313!
	I1129 09:04:35.016648       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1d490b20-7a86-4524-bb18-37c00fb6dca1", APIVersion:"v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-357829_b6a1c520-de9d-494e-adbf-4b6205489313 became leader
	W1129 09:04:35.021349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:35.025551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:04:35.117019       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-357829_b6a1c520-de9d-494e-adbf-4b6205489313!
	W1129 09:04:37.105354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:37.233774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:39.237067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:39.241532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:41.244603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:41.250904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:43.254461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:43.259439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:45.263604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:45.268407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:47.272639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:47.279094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:49.282777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:49.290195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
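Note on the repeated client warnings in the storage-provisioner block above: the provisioner acquires and then periodically renews a leader-election lock named kube-system/k8s.io-minikube-hostpath, and that lock appears to be stored on a v1 Endpoints object, so each renewal seems to trigger the "v1 Endpoints is deprecated in v1.33+" warning at roughly two-second intervals. A minimal sketch (object, namespace, and profile names are taken from the log above; kubectl access to the cluster is assumed) of how one could inspect that lock object:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Hypothetical check: dump the annotations of the Endpoints object that holds the
	// provisioner's leader-election record (names taken from the provisioner log above).
	out, err := exec.Command("kubectl",
		"--context", "default-k8s-diff-port-357829",
		"-n", "kube-system",
		"get", "endpoints", "k8s.io-minikube-hostpath",
		"-o", "jsonpath={.metadata.annotations}",
	).Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", out)
}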
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-357829 -n default-k8s-diff-port-357829
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-357829 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
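For reference, the field-selector query the harness runs above (helpers_test.go:269) is what confirms that no pod is stuck outside the Running phase. A minimal sketch of the same check driven from Go via os/exec (the context name comes from the log above; everything else is standard kubectl):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same query as helpers_test.go:269 above: list pods in all namespaces whose phase
	// is anything other than Running; an empty result means no pod is stuck.
	out, err := exec.Command("kubectl",
		"--context", "default-k8s-diff-port-357829",
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running",
	).CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Printf("non-running pods: %q\n", out)
}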
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-357829
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-357829:

-- stdout --
	[
	    {
	        "Id": "05de3679451a55dfc8fb4f57b250faa8e463d0b965a1c1b1576b246b02697d19",
	        "Created": "2025-11-29T09:03:58.829078857Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 525127,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-29T09:03:58.876457486Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/05de3679451a55dfc8fb4f57b250faa8e463d0b965a1c1b1576b246b02697d19/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05de3679451a55dfc8fb4f57b250faa8e463d0b965a1c1b1576b246b02697d19/hostname",
	        "HostsPath": "/var/lib/docker/containers/05de3679451a55dfc8fb4f57b250faa8e463d0b965a1c1b1576b246b02697d19/hosts",
	        "LogPath": "/var/lib/docker/containers/05de3679451a55dfc8fb4f57b250faa8e463d0b965a1c1b1576b246b02697d19/05de3679451a55dfc8fb4f57b250faa8e463d0b965a1c1b1576b246b02697d19-json.log",
	        "Name": "/default-k8s-diff-port-357829",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-357829:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-357829",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "05de3679451a55dfc8fb4f57b250faa8e463d0b965a1c1b1576b246b02697d19",
	                "LowerDir": "/var/lib/docker/overlay2/769eeeb85005464c69c79c6358da0e6738b1584b9f5c0a657dda7b63cc2652e4-init/diff:/var/lib/docker/overlay2/eb180691bce18b8d981b2d61ed0962851c615364ed77c18ff66d559424569005/diff",
	                "MergedDir": "/var/lib/docker/overlay2/769eeeb85005464c69c79c6358da0e6738b1584b9f5c0a657dda7b63cc2652e4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/769eeeb85005464c69c79c6358da0e6738b1584b9f5c0a657dda7b63cc2652e4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/769eeeb85005464c69c79c6358da0e6738b1584b9f5c0a657dda7b63cc2652e4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-357829",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-357829/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-357829",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-357829",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-357829",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d3783ab61f07fb536c578fe5694915165cec9448bb8b6b991ad6987f87f01ef0",
	            "SandboxKey": "/var/run/docker/netns/d3783ab61f07",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-357829": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f17ce87ca21659c5da3c274e1459137df3b8383021f2c5ec9c0cce59ba7e7b7c",
	                    "EndpointID": "f94e99d27baba5f3fb52c96f5af6417ce5e6509bebf3eb66f40c6165550e9014",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "46:49:5e:16:5e:f4",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-357829",
	                        "05de3679451a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
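The NetworkSettings.Ports section of the inspect output above shows the API server port 8444/tcp published on 127.0.0.1:33086. A minimal sketch (mirroring the Go-template style the harness itself uses for port 22/tcp later in this log) of how that host port could be read back from Docker:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Ask Docker for the host port published for the container's 8444/tcp (the
	// --apiserver-port used by this profile); per the inspect output above this prints 33086.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}`,
		"default-k8s-diff-port-357829",
	).Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("kube-apiserver published on 127.0.0.1:%s", out)
}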
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-357829 -n default-k8s-diff-port-357829
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-357829 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-357829 logs -n 25: (1.210446983s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ pause   │ -p old-k8s-version-295154 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-295154       │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ unpause │ -p old-k8s-version-295154 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-295154       │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ delete  │ -p old-k8s-version-295154                                                                                                                                                                                                                           │ old-k8s-version-295154       │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ delete  │ -p old-k8s-version-295154                                                                                                                                                                                                                           │ old-k8s-version-295154       │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ start   │ -p embed-certs-976238 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-976238           │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:04 UTC │
	│ image   │ no-preload-924441 image list --format=json                                                                                                                                                                                                          │ no-preload-924441            │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ pause   │ -p no-preload-924441 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-924441            │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ unpause │ -p no-preload-924441 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-924441            │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ delete  │ -p no-preload-924441                                                                                                                                                                                                                                │ no-preload-924441            │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ delete  │ -p no-preload-924441                                                                                                                                                                                                                                │ no-preload-924441            │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ delete  │ -p disable-driver-mounts-286131                                                                                                                                                                                                                     │ disable-driver-mounts-286131 │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:03 UTC │
	│ start   │ -p default-k8s-diff-port-357829 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-357829 │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:04 UTC │
	│ start   │ -p cert-expiration-368536 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-368536       │ jenkins │ v1.37.0 │ 29 Nov 25 09:03 UTC │ 29 Nov 25 09:04 UTC │
	│ delete  │ -p cert-expiration-368536                                                                                                                                                                                                                           │ cert-expiration-368536       │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │ 29 Nov 25 09:04 UTC │
	│ start   │ -p newest-cni-106601 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-106601            │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │ 29 Nov 25 09:04 UTC │
	│ start   │ -p kubernetes-upgrade-806701 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-806701    │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │                     │
	│ start   │ -p kubernetes-upgrade-806701 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-806701    │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │ 29 Nov 25 09:04 UTC │
	│ delete  │ -p kubernetes-upgrade-806701                                                                                                                                                                                                                        │ kubernetes-upgrade-806701    │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │ 29 Nov 25 09:04 UTC │
	│ start   │ -p auto-770004 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-770004                  │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-106601 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-106601            │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │ 29 Nov 25 09:04 UTC │
	│ stop    │ -p newest-cni-106601 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-106601            │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │ 29 Nov 25 09:04 UTC │
	│ addons  │ enable metrics-server -p embed-certs-976238 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-976238           │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │ 29 Nov 25 09:04 UTC │
	│ addons  │ enable dashboard -p newest-cni-106601 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-106601            │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │ 29 Nov 25 09:04 UTC │
	│ start   │ -p newest-cni-106601 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-106601            │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │                     │
	│ stop    │ -p embed-certs-976238 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-976238           │ jenkins │ v1.37.0 │ 29 Nov 25 09:04 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:04:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:04:41.406685  540002 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:04:41.406896  540002 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:04:41.406909  540002 out.go:374] Setting ErrFile to fd 2...
	I1129 09:04:41.406915  540002 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:04:41.407223  540002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-255825/.minikube/bin
	I1129 09:04:41.407865  540002 out.go:368] Setting JSON to false
	I1129 09:04:41.409558  540002 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6425,"bootTime":1764400656,"procs":325,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:04:41.409641  540002 start.go:143] virtualization: kvm guest
	I1129 09:04:41.411942  540002 out.go:179] * [newest-cni-106601] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:04:41.413320  540002 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:04:41.413316  540002 notify.go:221] Checking for updates...
	I1129 09:04:41.414665  540002 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:04:41.415856  540002 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-255825/kubeconfig
	I1129 09:04:41.417157  540002 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-255825/.minikube
	I1129 09:04:41.418186  540002 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:04:41.419933  540002 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:04:41.421880  540002 config.go:182] Loaded profile config "newest-cni-106601": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:04:41.422773  540002 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:04:41.451869  540002 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 09:04:41.452091  540002 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:04:41.525075  540002 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-29 09:04:41.512630655 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:04:41.525184  540002 docker.go:319] overlay module found
	I1129 09:04:41.527287  540002 out.go:179] * Using the docker driver based on existing profile
	I1129 09:04:41.528414  540002 start.go:309] selected driver: docker
	I1129 09:04:41.528429  540002 start.go:927] validating driver "docker" against &{Name:newest-cni-106601 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-106601 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:04:41.528547  540002 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:04:41.529197  540002 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:04:41.609608  540002 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-29 09:04:41.59502198 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:04:41.610075  540002 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1129 09:04:41.610126  540002 cni.go:84] Creating CNI manager for ""
	I1129 09:04:41.610204  540002 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:04:41.610267  540002 start.go:353] cluster config:
	{Name:newest-cni-106601 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-106601 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:04:41.611758  540002 out.go:179] * Starting "newest-cni-106601" primary control-plane node in "newest-cni-106601" cluster
	I1129 09:04:41.612743  540002 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1129 09:04:41.613866  540002 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1129 09:04:38.189697  535908 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22000-255825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-770004:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.71853773s)
	I1129 09:04:38.189726  535908 kic.go:203] duration metric: took 4.718712191s to extract preloaded images to volume ...
	W1129 09:04:38.189899  535908 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1129 09:04:38.189945  535908 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1129 09:04:38.189986  535908 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1129 09:04:38.290030  535908 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-770004 --name auto-770004 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-770004 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-770004 --network auto-770004 --ip 192.168.85.2 --volume auto-770004:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1129 09:04:38.646034  535908 cli_runner.go:164] Run: docker container inspect auto-770004 --format={{.State.Running}}
	I1129 09:04:38.667719  535908 cli_runner.go:164] Run: docker container inspect auto-770004 --format={{.State.Status}}
	I1129 09:04:38.692064  535908 cli_runner.go:164] Run: docker exec auto-770004 stat /var/lib/dpkg/alternatives/iptables
	I1129 09:04:38.750534  535908 oci.go:144] the created container "auto-770004" has a running status.
	I1129 09:04:38.750568  535908 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22000-255825/.minikube/machines/auto-770004/id_rsa...
	I1129 09:04:38.989860  535908 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22000-255825/.minikube/machines/auto-770004/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1129 09:04:39.032952  535908 cli_runner.go:164] Run: docker container inspect auto-770004 --format={{.State.Status}}
	I1129 09:04:39.060809  535908 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1129 09:04:39.060833  535908 kic_runner.go:114] Args: [docker exec --privileged auto-770004 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1129 09:04:39.124616  535908 cli_runner.go:164] Run: docker container inspect auto-770004 --format={{.State.Status}}
	I1129 09:04:39.150282  535908 machine.go:94] provisionDockerMachine start ...
	I1129 09:04:39.150391  535908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-770004
	I1129 09:04:39.171969  535908 main.go:143] libmachine: Using SSH client type: native
	I1129 09:04:39.172233  535908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1129 09:04:39.172251  535908 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:04:39.326141  535908 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-770004
	
	I1129 09:04:39.326169  535908 ubuntu.go:182] provisioning hostname "auto-770004"
	I1129 09:04:39.326224  535908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-770004
	I1129 09:04:39.351481  535908 main.go:143] libmachine: Using SSH client type: native
	I1129 09:04:39.351888  535908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1129 09:04:39.351913  535908 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-770004 && echo "auto-770004" | sudo tee /etc/hostname
	I1129 09:04:39.523428  535908 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-770004
	
	I1129 09:04:39.523533  535908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-770004
	I1129 09:04:39.546768  535908 main.go:143] libmachine: Using SSH client type: native
	I1129 09:04:39.547089  535908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1129 09:04:39.547118  535908 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-770004' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-770004/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-770004' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:04:39.699176  535908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:04:39.699211  535908 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-255825/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-255825/.minikube}
	I1129 09:04:39.699242  535908 ubuntu.go:190] setting up certificates
	I1129 09:04:39.699256  535908 provision.go:84] configureAuth start
	I1129 09:04:39.699338  535908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-770004
	I1129 09:04:39.725652  535908 provision.go:143] copyHostCerts
	I1129 09:04:39.725713  535908 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem, removing ...
	I1129 09:04:39.725723  535908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem
	I1129 09:04:39.725826  535908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem (1679 bytes)
	I1129 09:04:39.725973  535908 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem, removing ...
	I1129 09:04:39.725988  535908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem
	I1129 09:04:39.726034  535908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem (1078 bytes)
	I1129 09:04:39.726129  535908 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem, removing ...
	I1129 09:04:39.726140  535908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem
	I1129 09:04:39.726178  535908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem (1123 bytes)
	I1129 09:04:39.726282  535908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem org=jenkins.auto-770004 san=[127.0.0.1 192.168.85.2 auto-770004 localhost minikube]
	I1129 09:04:39.845319  535908 provision.go:177] copyRemoteCerts
	I1129 09:04:39.845397  535908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:04:39.845449  535908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-770004
	I1129 09:04:39.868062  535908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/auto-770004/id_rsa Username:docker}
	I1129 09:04:39.976936  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1129 09:04:39.997143  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 09:04:40.016432  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:04:40.038786  535908 provision.go:87] duration metric: took 339.511821ms to configureAuth
	I1129 09:04:40.038817  535908 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:04:40.039047  535908 config.go:182] Loaded profile config "auto-770004": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:04:40.039068  535908 machine.go:97] duration metric: took 888.764096ms to provisionDockerMachine
	I1129 09:04:40.039080  535908 client.go:176] duration metric: took 7.110115816s to LocalClient.Create
	I1129 09:04:40.039108  535908 start.go:167] duration metric: took 7.110183776s to libmachine.API.Create "auto-770004"
	I1129 09:04:40.039116  535908 start.go:293] postStartSetup for "auto-770004" (driver="docker")
	I1129 09:04:40.039128  535908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:04:40.039188  535908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:04:40.039243  535908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-770004
	I1129 09:04:40.063572  535908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/auto-770004/id_rsa Username:docker}
	I1129 09:04:40.173901  535908 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:04:40.177658  535908 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:04:40.177689  535908 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:04:40.177703  535908 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-255825/.minikube/addons for local assets ...
	I1129 09:04:40.177795  535908 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-255825/.minikube/files for local assets ...
	I1129 09:04:40.177915  535908 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem -> 2594832.pem in /etc/ssl/certs
	I1129 09:04:40.178047  535908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:04:40.187514  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem --> /etc/ssl/certs/2594832.pem (1708 bytes)
	I1129 09:04:40.213157  535908 start.go:296] duration metric: took 174.012404ms for postStartSetup
	I1129 09:04:40.213579  535908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-770004
	I1129 09:04:40.233634  535908 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/config.json ...
	I1129 09:04:40.234009  535908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:04:40.234050  535908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-770004
	I1129 09:04:40.259627  535908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/auto-770004/id_rsa Username:docker}
	I1129 09:04:40.363903  535908 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:04:40.371115  535908 start.go:128] duration metric: took 7.44457847s to createHost
	I1129 09:04:40.371144  535908 start.go:83] releasing machines lock for "auto-770004", held for 7.444700159s
	I1129 09:04:40.371228  535908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-770004
	I1129 09:04:40.394654  535908 ssh_runner.go:195] Run: cat /version.json
	I1129 09:04:40.394714  535908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-770004
	I1129 09:04:40.394721  535908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:04:40.394822  535908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-770004
	I1129 09:04:40.416903  535908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/auto-770004/id_rsa Username:docker}
	I1129 09:04:40.417398  535908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/auto-770004/id_rsa Username:docker}
	I1129 09:04:40.519871  535908 ssh_runner.go:195] Run: systemctl --version
	I1129 09:04:40.582471  535908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:04:40.587256  535908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:04:40.587318  535908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:04:40.630527  535908 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1129 09:04:40.630556  535908 start.go:496] detecting cgroup driver to use...
	I1129 09:04:40.630589  535908 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:04:40.630635  535908 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1129 09:04:40.648104  535908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1129 09:04:40.672777  535908 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:04:40.672843  535908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:04:40.694694  535908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:04:40.713598  535908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:04:40.801300  535908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:04:40.889574  535908 docker.go:234] disabling docker service ...
	I1129 09:04:40.889634  535908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:04:40.913072  535908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:04:40.927112  535908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:04:41.012009  535908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:04:41.099449  535908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:04:41.113880  535908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:04:41.129786  535908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1129 09:04:41.142654  535908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1129 09:04:41.152132  535908 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1129 09:04:41.152206  535908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1129 09:04:41.161918  535908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:04:41.171727  535908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1129 09:04:41.181216  535908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:04:41.190369  535908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:04:41.198895  535908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1129 09:04:41.208075  535908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1129 09:04:41.217523  535908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1129 09:04:41.227378  535908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:04:41.237589  535908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:04:41.248203  535908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:04:41.370706  535908 ssh_runner.go:195] Run: sudo systemctl restart containerd
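The block above rewrites /etc/containerd/config.toml in place: it pins the sandbox image to registry.k8s.io/pause:3.10.1, sets SystemdCgroup = true to match the "systemd" cgroup driver detected on the host, forces the runc v2 shim, points conf_dir at /etc/cni/net.d, re-enables unprivileged ports, turns on IPv4 forwarding, and then restarts containerd. A condensed shell sketch of the core of that reconfiguration, assuming a config.toml is already present on the node:

	# pin the pause image and switch runc to the systemd cgroup driver
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
	# make sure pod traffic can be forwarded, then reload and restart the runtime
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload && sudo systemctl restart containerd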
	I1129 09:04:41.510398  535908 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1129 09:04:41.510512  535908 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1129 09:04:41.515569  535908 start.go:564] Will wait 60s for crictl version
	I1129 09:04:41.515631  535908 ssh_runner.go:195] Run: which crictl
	I1129 09:04:41.521512  535908 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:04:41.562942  535908 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1129 09:04:41.563038  535908 ssh_runner.go:195] Run: containerd --version
	I1129 09:04:41.595112  535908 ssh_runner.go:195] Run: containerd --version
	I1129 09:04:41.630089  535908 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1129 09:04:41.614876  540002 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:04:41.614918  540002 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1129 09:04:41.614944  540002 cache.go:65] Caching tarball of preloaded images
	I1129 09:04:41.614990  540002 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 09:04:41.615046  540002 preload.go:238] Found /home/jenkins/minikube-integration/22000-255825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1129 09:04:41.615057  540002 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1129 09:04:41.615225  540002 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/newest-cni-106601/config.json ...
	I1129 09:04:41.641696  540002 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1129 09:04:41.641721  540002 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1129 09:04:41.641766  540002 cache.go:243] Successfully downloaded all kic artifacts
	I1129 09:04:41.641806  540002 start.go:360] acquireMachinesLock for newest-cni-106601: {Name:mk30620cdf9d2fed47ccfe496a0ec3101f264b78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:04:41.641883  540002 start.go:364] duration metric: took 47.588µs to acquireMachinesLock for "newest-cni-106601"
	I1129 09:04:41.641909  540002 start.go:96] Skipping create...Using existing machine configuration
	I1129 09:04:41.641916  540002 fix.go:54] fixHost starting: 
	I1129 09:04:41.642228  540002 cli_runner.go:164] Run: docker container inspect newest-cni-106601 --format={{.State.Status}}
	I1129 09:04:41.664422  540002 fix.go:112] recreateIfNeeded on newest-cni-106601: state=Stopped err=<nil>
	W1129 09:04:41.664460  540002 fix.go:138] unexpected machine state, will restart: <nil>
	I1129 09:04:41.631148  535908 cli_runner.go:164] Run: docker network inspect auto-770004 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:04:41.656328  535908 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1129 09:04:41.661185  535908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
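The one-liner above is minikube's standard /etc/hosts update: grep -v strips any stale host.minikube.internal entry, the fresh entry is appended, and the result is written to a temp file that is copied back with sudo, so only the final cp needs root rather than the shell redirection. A sketch of the same idiom with the address and name pulled out as variables (the variables are placeholders, not part of the original command):

	ip="192.168.85.1"; name="host.minikube.internal"
	{ grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$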
	I1129 09:04:41.674264  535908 kubeadm.go:884] updating cluster {Name:auto-770004 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-770004 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:04:41.674413  535908 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:04:41.674477  535908 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:04:41.707720  535908 containerd.go:627] all images are preloaded for containerd runtime.
	I1129 09:04:41.707768  535908 containerd.go:534] Images already preloaded, skipping extraction
	I1129 09:04:41.707835  535908 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:04:41.751908  535908 containerd.go:627] all images are preloaded for containerd runtime.
	I1129 09:04:41.751933  535908 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:04:41.751942  535908 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1129 09:04:41.752060  535908 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-770004 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-770004 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:04:41.752134  535908 ssh_runner.go:195] Run: sudo crictl info
	I1129 09:04:41.783606  535908 cni.go:84] Creating CNI manager for ""
	I1129 09:04:41.783646  535908 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:04:41.783669  535908 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:04:41.783781  535908 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-770004 NodeName:auto-770004 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:04:41.783973  535908 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "auto-770004"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
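The generated kubeadm config above combines an InitConfiguration (node registration, CRI socket), a ClusterConfiguration (cert SANs, extra component args, pod and service CIDRs), a KubeletConfiguration (systemd cgroup driver, eviction thresholds disabled) and a KubeProxyConfiguration; a few lines further down it is copied to /var/tmp/minikube/kubeadm.yaml. A sketch of sanity-checking such a file on the node before kubeadm init, assuming the v1.34 kubeadm binary is on PATH:

	# validate the rendered document against the kubeadm API types without touching the node
	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# or walk the whole init flow without writing anything
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run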
	I1129 09:04:41.784072  535908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:04:41.795810  535908 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:04:41.795888  535908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:04:41.807186  535908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1129 09:04:41.829936  535908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:04:41.846472  535908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1129 09:04:41.861636  535908 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:04:41.866145  535908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:04:41.877099  535908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:04:41.987642  535908 ssh_runner.go:195] Run: sudo systemctl start kubelet
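The three scp calls above install the 10-kubeadm.conf drop-in carrying the ExecStart override shown earlier (315 bytes), the kubelet.service unit (352 bytes) and the kubeadm.yaml (2223 bytes); systemd is then reloaded and kubelet started. A sketch of confirming on the node that systemd actually picked up the override:

	systemctl cat kubelet         # should show /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in
	systemctl is-active kubelet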
	I1129 09:04:42.013333  535908 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004 for IP: 192.168.85.2
	I1129 09:04:42.013358  535908 certs.go:195] generating shared ca certs ...
	I1129 09:04:42.013381  535908 certs.go:227] acquiring lock for ca certs: {Name:mk5e6bcae0a6944966b241f3c6197a472703c991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:04:42.013560  535908 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.key
	I1129 09:04:42.013620  535908 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.key
	I1129 09:04:42.013636  535908 certs.go:257] generating profile certs ...
	I1129 09:04:42.013707  535908 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/client.key
	I1129 09:04:42.013722  535908 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/client.crt with IP's: []
	I1129 09:04:42.109549  535908 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/client.crt ...
	I1129 09:04:42.109586  535908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/client.crt: {Name:mkdcba972ae9b889a10497b78b0dc5d8c10c2bfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:04:42.109805  535908 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/client.key ...
	I1129 09:04:42.109825  535908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/client.key: {Name:mkf7487fe7304f26b8555354153479495769bc80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:04:42.110389  535908 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/apiserver.key.e745c512
	I1129 09:04:42.110418  535908 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/apiserver.crt.e745c512 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1129 09:04:42.238544  535908 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/apiserver.crt.e745c512 ...
	I1129 09:04:42.238581  535908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/apiserver.crt.e745c512: {Name:mk74a351aab7f154207df9b146b6f8aea1c9ceaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:04:42.238784  535908 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/apiserver.key.e745c512 ...
	I1129 09:04:42.238806  535908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/apiserver.key.e745c512: {Name:mkac3795cb8d46fbcc479466786f252fce972f81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:04:42.238920  535908 certs.go:382] copying /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/apiserver.crt.e745c512 -> /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/apiserver.crt
	I1129 09:04:42.239048  535908 certs.go:386] copying /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/apiserver.key.e745c512 -> /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/apiserver.key
	I1129 09:04:42.239134  535908 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/proxy-client.key
	I1129 09:04:42.239158  535908 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/proxy-client.crt with IP's: []
	I1129 09:04:42.313498  535908 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/proxy-client.crt ...
	I1129 09:04:42.313530  535908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/proxy-client.crt: {Name:mke6e95b0965e72c9ea4f083e3554e515a0c98ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:04:42.313728  535908 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/proxy-client.key ...
	I1129 09:04:42.313766  535908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/proxy-client.key: {Name:mk27d0487ae43c58adb88322643701996dcf764e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:04:42.313984  535908 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483.pem (1338 bytes)
	W1129 09:04:42.314034  535908 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483_empty.pem, impossibly tiny 0 bytes
	I1129 09:04:42.314049  535908 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 09:04:42.314090  535908 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:04:42.314134  535908 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:04:42.314169  535908 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem (1679 bytes)
	I1129 09:04:42.314234  535908 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem (1708 bytes)
	I1129 09:04:42.314900  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:04:42.334704  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:04:42.354378  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:04:42.372961  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 09:04:42.390309  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1129 09:04:42.407401  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1129 09:04:42.424315  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:04:42.440905  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/auto-770004/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 09:04:42.457837  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem --> /usr/share/ca-certificates/2594832.pem (1708 bytes)
	I1129 09:04:42.477679  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:04:42.494573  535908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483.pem --> /usr/share/ca-certificates/259483.pem (1338 bytes)
	I1129 09:04:42.511783  535908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:04:42.523853  535908 ssh_runner.go:195] Run: openssl version
	I1129 09:04:42.529876  535908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:04:42.537772  535908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:04:42.541322  535908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:04:42.541377  535908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:04:42.575696  535908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 09:04:42.583979  535908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259483.pem && ln -fs /usr/share/ca-certificates/259483.pem /etc/ssl/certs/259483.pem"
	I1129 09:04:42.592956  535908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259483.pem
	I1129 09:04:42.596356  535908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:35 /usr/share/ca-certificates/259483.pem
	I1129 09:04:42.596406  535908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259483.pem
	I1129 09:04:42.630672  535908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259483.pem /etc/ssl/certs/51391683.0"
	I1129 09:04:42.638924  535908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2594832.pem && ln -fs /usr/share/ca-certificates/2594832.pem /etc/ssl/certs/2594832.pem"
	I1129 09:04:42.647015  535908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2594832.pem
	I1129 09:04:42.650506  535908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:35 /usr/share/ca-certificates/2594832.pem
	I1129 09:04:42.650553  535908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2594832.pem
	I1129 09:04:42.685279  535908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2594832.pem /etc/ssl/certs/3ec20f2e.0"
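The repeated ls / openssl x509 -hash / ln -fs sequence above is how each certificate minikube installs (minikubeCA.pem plus the two test certs) gets registered with the system trust store: the file is placed under /usr/share/ca-certificates, its OpenSSL subject hash is computed, and a <hash>.0 symlink (b5213941.0, 51391683.0 and 3ec20f2e.0 here) is created in /etc/ssl/certs. A sketch of the same hash-and-link step for a single certificate:

	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")     # b5213941 for this CA, per the symlink above
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"    # the log links via /etc/ssl/certs/minikubeCA.pem instead; the effect is the same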
	I1129 09:04:42.693682  535908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:04:42.697222  535908 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 09:04:42.697289  535908 kubeadm.go:401] StartCluster: {Name:auto-770004 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-770004 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:04:42.697392  535908 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1129 09:04:42.697452  535908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:04:42.723391  535908 cri.go:89] found id: ""
	I1129 09:04:42.723441  535908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:04:42.731192  535908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 09:04:42.738974  535908 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1129 09:04:42.739026  535908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 09:04:42.746563  535908 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 09:04:42.746585  535908 kubeadm.go:158] found existing configuration files:
	
	I1129 09:04:42.746623  535908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 09:04:42.753939  535908 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 09:04:42.753983  535908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 09:04:42.761213  535908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 09:04:42.768329  535908 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 09:04:42.768378  535908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 09:04:42.775477  535908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 09:04:42.782500  535908 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 09:04:42.782548  535908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 09:04:42.789797  535908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 09:04:42.797258  535908 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 09:04:42.797292  535908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1129 09:04:42.804548  535908 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1129 09:04:42.841791  535908 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1129 09:04:42.841871  535908 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 09:04:42.873015  535908 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1129 09:04:42.873127  535908 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1129 09:04:42.873194  535908 kubeadm.go:319] OS: Linux
	I1129 09:04:42.873294  535908 kubeadm.go:319] CGROUPS_CPU: enabled
	I1129 09:04:42.873390  535908 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1129 09:04:42.873459  535908 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1129 09:04:42.873554  535908 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1129 09:04:42.873626  535908 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1129 09:04:42.873705  535908 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1129 09:04:42.873796  535908 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1129 09:04:42.873900  535908 kubeadm.go:319] CGROUPS_IO: enabled
	I1129 09:04:42.933116  535908 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 09:04:42.933241  535908 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 09:04:42.933401  535908 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1129 09:04:42.938520  535908 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 09:04:41.665897  540002 out.go:252] * Restarting existing docker container for "newest-cni-106601" ...
	I1129 09:04:41.665963  540002 cli_runner.go:164] Run: docker start newest-cni-106601
	I1129 09:04:41.965100  540002 cli_runner.go:164] Run: docker container inspect newest-cni-106601 --format={{.State.Status}}
	I1129 09:04:41.988619  540002 kic.go:430] container "newest-cni-106601" state is running.
	I1129 09:04:41.989265  540002 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-106601
	I1129 09:04:42.013932  540002 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/newest-cni-106601/config.json ...
	I1129 09:04:42.014319  540002 machine.go:94] provisionDockerMachine start ...
	I1129 09:04:42.014386  540002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-106601
	I1129 09:04:42.038645  540002 main.go:143] libmachine: Using SSH client type: native
	I1129 09:04:42.039099  540002 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1129 09:04:42.039132  540002 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:04:42.039917  540002 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42350->127.0.0.1:33098: read: connection reset by peer
	I1129 09:04:45.186201  540002 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-106601
	
	I1129 09:04:45.186239  540002 ubuntu.go:182] provisioning hostname "newest-cni-106601"
	I1129 09:04:45.186311  540002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-106601
	I1129 09:04:45.205701  540002 main.go:143] libmachine: Using SSH client type: native
	I1129 09:04:45.205946  540002 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1129 09:04:45.205975  540002 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-106601 && echo "newest-cni-106601" | sudo tee /etc/hostname
	I1129 09:04:45.365797  540002 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-106601
	
	I1129 09:04:45.365893  540002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-106601
	I1129 09:04:45.385791  540002 main.go:143] libmachine: Using SSH client type: native
	I1129 09:04:45.386026  540002 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1129 09:04:45.386043  540002 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-106601' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-106601/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-106601' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:04:45.533770  540002 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:04:45.533805  540002 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22000-255825/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-255825/.minikube}
	I1129 09:04:45.533869  540002 ubuntu.go:190] setting up certificates
	I1129 09:04:45.533886  540002 provision.go:84] configureAuth start
	I1129 09:04:45.533967  540002 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-106601
	I1129 09:04:45.552789  540002 provision.go:143] copyHostCerts
	I1129 09:04:45.552863  540002 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem, removing ...
	I1129 09:04:45.552880  540002 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem
	I1129 09:04:45.552963  540002 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/ca.pem (1078 bytes)
	I1129 09:04:45.553084  540002 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem, removing ...
	I1129 09:04:45.553098  540002 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem
	I1129 09:04:45.553150  540002 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/cert.pem (1123 bytes)
	I1129 09:04:45.553239  540002 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem, removing ...
	I1129 09:04:45.553250  540002 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem
	I1129 09:04:45.553292  540002 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-255825/.minikube/key.pem (1679 bytes)
	I1129 09:04:45.553451  540002 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem org=jenkins.newest-cni-106601 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-106601]
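The machine's server certificate is regenerated with a SAN list covering the loopback address, the container IP (192.168.94.2) and the minikube/newest-cni-106601 host names. A sketch of verifying those SANs on the resulting server.pem with openssl (the path is shortened here; the full path appears in the log line above):

	openssl x509 -noout -text -in ~/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'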
	I1129 09:04:45.643862  540002 provision.go:177] copyRemoteCerts
	I1129 09:04:45.643930  540002 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:04:45.643980  540002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-106601
	I1129 09:04:45.662982  540002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/newest-cni-106601/id_rsa Username:docker}
	I1129 09:04:45.765523  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1129 09:04:45.783749  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1129 09:04:45.801293  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1129 09:04:45.819586  540002 provision.go:87] duration metric: took 285.680751ms to configureAuth
	I1129 09:04:45.819618  540002 ubuntu.go:206] setting minikube options for container-runtime
	I1129 09:04:45.819866  540002 config.go:182] Loaded profile config "newest-cni-106601": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:04:45.819880  540002 machine.go:97] duration metric: took 3.805545616s to provisionDockerMachine
	I1129 09:04:45.819890  540002 start.go:293] postStartSetup for "newest-cni-106601" (driver="docker")
	I1129 09:04:45.819901  540002 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:04:45.819955  540002 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:04:45.820002  540002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-106601
	I1129 09:04:45.837708  540002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/newest-cni-106601/id_rsa Username:docker}
	I1129 09:04:45.942368  540002 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:04:45.946448  540002 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1129 09:04:45.946494  540002 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1129 09:04:45.946506  540002 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-255825/.minikube/addons for local assets ...
	I1129 09:04:45.946557  540002 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-255825/.minikube/files for local assets ...
	I1129 09:04:45.946635  540002 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem -> 2594832.pem in /etc/ssl/certs
	I1129 09:04:45.946727  540002 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:04:45.954934  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem --> /etc/ssl/certs/2594832.pem (1708 bytes)
	I1129 09:04:45.972792  540002 start.go:296] duration metric: took 152.887271ms for postStartSetup
	I1129 09:04:45.972880  540002 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:04:45.972936  540002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-106601
	I1129 09:04:45.990527  540002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/newest-cni-106601/id_rsa Username:docker}
	I1129 09:04:46.090401  540002 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1129 09:04:46.095289  540002 fix.go:56] duration metric: took 4.453365836s for fixHost
	I1129 09:04:46.095323  540002 start.go:83] releasing machines lock for "newest-cni-106601", held for 4.453422843s
	I1129 09:04:46.095414  540002 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-106601
	I1129 09:04:46.114930  540002 ssh_runner.go:195] Run: cat /version.json
	I1129 09:04:46.114989  540002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-106601
	I1129 09:04:46.114986  540002 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:04:46.115076  540002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-106601
	I1129 09:04:46.133328  540002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/newest-cni-106601/id_rsa Username:docker}
	I1129 09:04:46.135631  540002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/newest-cni-106601/id_rsa Username:docker}
	I1129 09:04:46.233362  540002 ssh_runner.go:195] Run: systemctl --version
	I1129 09:04:46.287472  540002 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:04:46.292631  540002 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:04:46.292693  540002 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:04:46.301125  540002 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1129 09:04:46.301154  540002 start.go:496] detecting cgroup driver to use...
	I1129 09:04:46.301189  540002 detect.go:190] detected "systemd" cgroup driver on host os
	I1129 09:04:46.301232  540002 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1129 09:04:46.318115  540002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1129 09:04:46.331824  540002 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:04:46.331873  540002 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:04:46.347455  540002 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:04:46.360693  540002 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:04:42.940384  535908 out.go:252]   - Generating certificates and keys ...
	I1129 09:04:42.940483  535908 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 09:04:42.940576  535908 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 09:04:43.280978  535908 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1129 09:04:43.639954  535908 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 09:04:43.699103  535908 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 09:04:43.989174  535908 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 09:04:44.307451  535908 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 09:04:44.307572  535908 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-770004 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1129 09:04:44.419421  535908 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 09:04:44.419602  535908 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-770004 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1129 09:04:44.676972  535908 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 09:04:44.937468  535908 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 09:04:45.006216  535908 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 09:04:45.006310  535908 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 09:04:45.361869  535908 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 09:04:45.917661  535908 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 09:04:46.258293  535908 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 09:04:46.481004  535908 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 09:04:46.701923  535908 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 09:04:46.703182  535908 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 09:04:46.707422  535908 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 09:04:46.442096  540002 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:04:46.529258  540002 docker.go:234] disabling docker service ...
	I1129 09:04:46.529338  540002 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:04:46.545918  540002 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:04:46.561168  540002 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:04:46.647463  540002 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:04:46.751237  540002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:04:46.765273  540002 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:04:46.783588  540002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1129 09:04:46.792988  540002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1129 09:04:46.802179  540002 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1129 09:04:46.802267  540002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1129 09:04:46.811443  540002 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:04:46.820603  540002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1129 09:04:46.830371  540002 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1129 09:04:46.840054  540002 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:04:46.848884  540002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1129 09:04:46.859237  540002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1129 09:04:46.869258  540002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1129 09:04:46.880391  540002 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:04:46.888972  540002 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:04:46.896755  540002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:04:46.980432  540002 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1129 09:04:47.100069  540002 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1129 09:04:47.100149  540002 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1129 09:04:47.105820  540002 start.go:564] Will wait 60s for crictl version
	I1129 09:04:47.105896  540002 ssh_runner.go:195] Run: which crictl
	I1129 09:04:47.110369  540002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1129 09:04:47.138327  540002 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1129 09:04:47.138394  540002 ssh_runner.go:195] Run: containerd --version
	I1129 09:04:47.161808  540002 ssh_runner.go:195] Run: containerd --version
	I1129 09:04:47.187135  540002 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1129 09:04:47.188396  540002 cli_runner.go:164] Run: docker network inspect newest-cni-106601 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1129 09:04:47.207860  540002 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1129 09:04:47.213033  540002 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:04:47.226913  540002 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1129 09:04:46.709619  535908 out.go:252]   - Booting up control plane ...
	I1129 09:04:46.709750  535908 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 09:04:46.709888  535908 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 09:04:46.710023  535908 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 09:04:46.727576  535908 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 09:04:46.727799  535908 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1129 09:04:46.735258  535908 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1129 09:04:46.735534  535908 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 09:04:46.735617  535908 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 09:04:46.847351  535908 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1129 09:04:46.847537  535908 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1129 09:04:47.349161  535908 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.922351ms
	I1129 09:04:47.352603  535908 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1129 09:04:47.352872  535908 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1129 09:04:47.353024  535908 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1129 09:04:47.353154  535908 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
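At this point kubeadm is polling the standard health endpoints: the kubelet's healthz on 10248, the apiserver's livez on the advertise address, and the controller-manager and scheduler on their secure local ports. A sketch of probing the same endpoints by hand from the node, assuming the kubeadm defaults that allow anonymous access to these paths (addresses taken from the log lines above):

	curl -s  http://127.0.0.1:10248/healthz      # kubelet
	curl -sk https://192.168.85.2:8443/livez     # kube-apiserver
	curl -sk https://127.0.0.1:10257/healthz     # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez       # kube-scheduler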
	I1129 09:04:47.228923  540002 kubeadm.go:884] updating cluster {Name:newest-cni-106601 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-106601 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:04:47.229104  540002 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 09:04:47.229178  540002 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:04:47.257766  540002 containerd.go:627] all images are preloaded for containerd runtime.
	I1129 09:04:47.257796  540002 containerd.go:534] Images already preloaded, skipping extraction
	I1129 09:04:47.257878  540002 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:04:47.287866  540002 containerd.go:627] all images are preloaded for containerd runtime.
	I1129 09:04:47.287892  540002 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:04:47.287902  540002 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 containerd true true} ...
	I1129 09:04:47.288040  540002 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-106601 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-106601 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:04:47.288118  540002 ssh_runner.go:195] Run: sudo crictl info
	I1129 09:04:47.316701  540002 cni.go:84] Creating CNI manager for ""
	I1129 09:04:47.316750  540002 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 09:04:47.316770  540002 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1129 09:04:47.316794  540002 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-106601 NodeName:newest-cni-106601 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:04:47.316913  540002 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-106601"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 09:04:47.316982  540002 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:04:47.325866  540002 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:04:47.325934  540002 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:04:47.334027  540002 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1129 09:04:47.348124  540002 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:04:47.363062  540002 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1129 09:04:47.376378  540002 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1129 09:04:47.380213  540002 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:04:47.390043  540002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:04:47.473518  540002 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:04:47.502720  540002 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/newest-cni-106601 for IP: 192.168.94.2
	I1129 09:04:47.502777  540002 certs.go:195] generating shared ca certs ...
	I1129 09:04:47.502800  540002 certs.go:227] acquiring lock for ca certs: {Name:mk5e6bcae0a6944966b241f3c6197a472703c991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:04:47.502962  540002 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.key
	I1129 09:04:47.503018  540002 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.key
	I1129 09:04:47.503031  540002 certs.go:257] generating profile certs ...
	I1129 09:04:47.503139  540002 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/newest-cni-106601/client.key
	I1129 09:04:47.503205  540002 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/newest-cni-106601/apiserver.key.8f52e5f3
	I1129 09:04:47.503264  540002 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/newest-cni-106601/proxy-client.key
	I1129 09:04:47.503407  540002 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483.pem (1338 bytes)
	W1129 09:04:47.503447  540002 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483_empty.pem, impossibly tiny 0 bytes
	I1129 09:04:47.503458  540002 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 09:04:47.503487  540002 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/ca.pem (1078 bytes)
	I1129 09:04:47.503517  540002 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:04:47.503548  540002 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/certs/key.pem (1679 bytes)
	I1129 09:04:47.503603  540002 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem (1708 bytes)
	I1129 09:04:47.504327  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:04:47.524063  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:04:47.543390  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:04:47.566567  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 09:04:47.598366  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/newest-cni-106601/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1129 09:04:47.631168  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/newest-cni-106601/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 09:04:47.657875  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/newest-cni-106601/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:04:47.682526  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/newest-cni-106601/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 09:04:47.707637  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:04:47.734081  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/certs/259483.pem --> /usr/share/ca-certificates/259483.pem (1338 bytes)
	I1129 09:04:47.757834  540002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/ssl/certs/2594832.pem --> /usr/share/ca-certificates/2594832.pem (1708 bytes)
	I1129 09:04:47.782247  540002 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:04:47.802216  540002 ssh_runner.go:195] Run: openssl version
	I1129 09:04:47.812398  540002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:04:47.825424  540002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:04:47.832798  540002 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:04:47.832883  540002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:04:47.884801  540002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 09:04:47.897659  540002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259483.pem && ln -fs /usr/share/ca-certificates/259483.pem /etc/ssl/certs/259483.pem"
	I1129 09:04:47.908023  540002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259483.pem
	I1129 09:04:47.912703  540002 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:35 /usr/share/ca-certificates/259483.pem
	I1129 09:04:47.912801  540002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259483.pem
	I1129 09:04:47.950815  540002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/259483.pem /etc/ssl/certs/51391683.0"
	I1129 09:04:47.961598  540002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2594832.pem && ln -fs /usr/share/ca-certificates/2594832.pem /etc/ssl/certs/2594832.pem"
	I1129 09:04:47.970983  540002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2594832.pem
	I1129 09:04:47.975318  540002 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:35 /usr/share/ca-certificates/2594832.pem
	I1129 09:04:47.975382  540002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2594832.pem
	I1129 09:04:48.011787  540002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2594832.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:04:48.020882  540002 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:04:48.024937  540002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 09:04:48.077023  540002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 09:04:48.142309  540002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 09:04:48.207068  540002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 09:04:48.270185  540002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 09:04:48.341824  540002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1129 09:04:48.432505  540002 kubeadm.go:401] StartCluster: {Name:newest-cni-106601 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-106601 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:04:48.432687  540002 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1129 09:04:48.432837  540002 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:04:48.503789  540002 cri.go:89] found id: "1a34f3692687524428db0a630cd5941a36ca50fe3367ce64034d94caedadde8c"
	I1129 09:04:48.503833  540002 cri.go:89] found id: "9b642e260eb4b5d26d7f23ed36f23236cb26d277be6e7cfb9d24ed67d7106b31"
	I1129 09:04:48.503839  540002 cri.go:89] found id: "12da39af965a594afb3be832e4fed048c25f5674090740c7e705c538a56eae17"
	I1129 09:04:48.503844  540002 cri.go:89] found id: "2331d62583ac9f10550bb4eaba3340dab40c03f5218c02d08f64882f0b1c4efd"
	I1129 09:04:48.503848  540002 cri.go:89] found id: "0a0709f3a32f1172a488e884f16bb33e9710f74cb127ec39237d993fb318da36"
	I1129 09:04:48.503854  540002 cri.go:89] found id: "48611e4305372052385ada3c5cf83f207932d786f0e90456beba3b8d51dbbb05"
	I1129 09:04:48.503864  540002 cri.go:89] found id: "0381cce8327708e526ae49357c2734ae8e40ce6de1ebbdd6e6398ba6f1d47e24"
	I1129 09:04:48.503868  540002 cri.go:89] found id: "37e3444d9c250591ff98cfb50f85bcbc6ba13fdc0ce437b26555cf7379276ffb"
	I1129 09:04:48.503873  540002 cri.go:89] found id: "2f3ba633c7f133d99d8b4712f9a6b313e59011b9722432913fb9d0c1235c9549"
	I1129 09:04:48.503883  540002 cri.go:89] found id: "b10201d00508a9df4afa664712a9150d2cf98e3382751ec1f8ef0e585560090d"
	I1129 09:04:48.503887  540002 cri.go:89] found id: ""
	I1129 09:04:48.503953  540002 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1129 09:04:48.541414  540002 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"12da39af965a594afb3be832e4fed048c25f5674090740c7e705c538a56eae17","pid":965,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12da39af965a594afb3be832e4fed048c25f5674090740c7e705c538a56eae17","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12da39af965a594afb3be832e4fed048c25f5674090740c7e705c538a56eae17/rootfs","created":"2025-11-29T09:04:48.319264039Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"ce95e7374b8278188ca24b99d033fc80e6e2c033e081e21d65077346a5cca7b1","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-106601","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ec937ace912df6c1bba8b2956c12b573"},"owner":"root"},{"ociVersion":"1.2.1","id":"1a34f3692687524428db0a630cd5941a36ca50fe3367ce64034d94caedadde8c","pid":980,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a34f3692687524428db0a630cd5941a36ca50fe3367ce64034d94caedadde8c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a34f3692687524428db0a630cd5941a36ca50fe3367ce64034d94caedadde8c/rootfs","created":"2025-11-29T09:04:48.332463796Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"4ea5e68b0efcfa0b1fba7653ede7aad11198b1359923f103f490b002316a5875","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-106601","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2aff9f1628c54092dcee2cd221e4eb70"},"owner":"root"},{"ociVersion":"1.2.1","id":"2331d62583ac9f10550bb4eaba3340dab40c03f5218c02d08f64882f0b1c4efd","pid":916,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2331d62583ac9f10550bb4eaba3340dab40c03f5218c02d08f64882f0b1c4efd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2331d62583ac9f10550bb4eaba3340dab40c03f5218c02d08f64882f0b1c4efd/rootfs","created":"2025-11-29T09:04:48.259240777Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"8552d39438ce8d11ca8f8a5435fd73f831bb2c0f16690406ae3c882f10dbcebe","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-106601","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"3a3159ecee55ca692de91698a24fc36e"},"owner":"root"},{"ociVersion":"1.2.1","id":"4ea5e68b0efcfa0b1fba7653ede7aad11198b1359923f103f490b002316a5875","pid":862,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ea5e68b0efcfa0b1fba7653ede7aad11198b1359923f103f490b002316a5875","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ea5e68b0efcfa0b1fba7653ede7aad11198b1359923f103f490b002316a5875/rootfs","created":"2025-11-29T09:04:48.143996691Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"4ea5e68b0efcfa0b1fba7653ede7aad11198b1359923f103f490b002316a5875","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-106601_2aff9f1628c54092dcee2cd221e4eb70","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-106601","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2aff9f1628c54092dcee2cd221e4eb70"},"owner":"root"},{"ociVersion":"1.2.1","id":"8552d39438ce8d11ca8f8a5435fd73f831bb2c0f16690406ae3c882f10dbcebe","pid":793,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8552d39438ce8d11ca8f8a5435fd73f831bb2c0f16690406ae3c882f10dbcebe","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8552d39438ce8d11ca8f8a5435fd73f831bb2c0f16690406ae3c882f10dbcebe/rootfs","created":"2025-11-29T09:04:48.097894402Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"8552d39438ce8d11ca8f8a5435fd73f831bb2c0f16690406ae3c882f10dbcebe","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-106601_3a3159ecee55ca692de91698a24fc36e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-106601","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"3a3159ecee55ca692de91698a24fc36e"},"owner":"root"},{"ociVersion":"1.2.1","id":"9b642e260eb4b5d26d7f23ed36f23236cb26d277be6e7cfb9d24ed67d7106b31","pid":971,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9b642e260eb4b5d26d7f23ed36f23236cb26d277be6e7cfb9d24ed67d7106b31","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9b642e260eb4b5d26d7f23ed36f23236cb26d277be6e7cfb9d24ed67d7106b31/rootfs","created":"2025-11-29T09:04:48.347690228Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"d74d2e3ba1383af19996948562d0ac1cfcf9fdb7fa9f4f090fd20efb02f69b77","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-106601","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"589cd80f21a21d0fdb9074d648368f4c"},"owner":"root"},{"ociVersion":"1.2.1","id":"ce95e7374b8278188ca24b99d033fc80e6e2c033e081e21d65077346a5cca7b1","pid":855,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce95e7374b8278188ca24b99d033fc80e6e2c033e081e21d65077346a5cca7b1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce95e7374b8278188ca24b99d033fc80e6e2c033e081e21d65077346a5cca7b1/rootfs","created":"2025-11-29T09:04:48.136298178Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"ce95e7374b8278188ca24b99d033fc80e6e2c033e081e21d65077346a5cca7b1","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-106601_ec937ace912df6c1bba8b2956c12b573","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-106601","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ec937ace912df6c1bba8b2956c12b573"},"owner":"root"},{"ociVersion":"1.2.1","id":"d74d2e3ba1383af19996948562d0ac1cfcf9fdb7fa9f4f090fd20efb02f69b77","pid":866,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d74d2e3ba1383af19996948562d0ac1cfcf9fdb7fa9f4f090fd20efb02f69b77","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d74d2e3ba1383af19996948562d0ac1cfcf9fdb7fa9f4f090fd20efb02f69b77/rootfs","created":"2025-11-29T09:04:48.153663888Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"d74d2e3ba1383af19996948562d0ac1cfcf9fdb7fa9f4f090fd20efb02f69b77","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-106601_589cd80f21a21d0fdb9074d648368f4c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-106601","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"589cd80f21a21d0fdb9074d648368f4c"},"owner":"root"}]
	I1129 09:04:48.541609  540002 cri.go:126] list returned 8 containers
	I1129 09:04:48.541622  540002 cri.go:129] container: {ID:12da39af965a594afb3be832e4fed048c25f5674090740c7e705c538a56eae17 Status:running}
	I1129 09:04:48.541656  540002 cri.go:135] skipping {12da39af965a594afb3be832e4fed048c25f5674090740c7e705c538a56eae17 running}: state = "running", want "paused"
	I1129 09:04:48.541668  540002 cri.go:129] container: {ID:1a34f3692687524428db0a630cd5941a36ca50fe3367ce64034d94caedadde8c Status:running}
	I1129 09:04:48.541675  540002 cri.go:135] skipping {1a34f3692687524428db0a630cd5941a36ca50fe3367ce64034d94caedadde8c running}: state = "running", want "paused"
	I1129 09:04:48.541682  540002 cri.go:129] container: {ID:2331d62583ac9f10550bb4eaba3340dab40c03f5218c02d08f64882f0b1c4efd Status:running}
	I1129 09:04:48.541690  540002 cri.go:135] skipping {2331d62583ac9f10550bb4eaba3340dab40c03f5218c02d08f64882f0b1c4efd running}: state = "running", want "paused"
	I1129 09:04:48.541696  540002 cri.go:129] container: {ID:4ea5e68b0efcfa0b1fba7653ede7aad11198b1359923f103f490b002316a5875 Status:running}
	I1129 09:04:48.541705  540002 cri.go:131] skipping 4ea5e68b0efcfa0b1fba7653ede7aad11198b1359923f103f490b002316a5875 - not in ps
	I1129 09:04:48.541710  540002 cri.go:129] container: {ID:8552d39438ce8d11ca8f8a5435fd73f831bb2c0f16690406ae3c882f10dbcebe Status:running}
	I1129 09:04:48.541715  540002 cri.go:131] skipping 8552d39438ce8d11ca8f8a5435fd73f831bb2c0f16690406ae3c882f10dbcebe - not in ps
	I1129 09:04:48.541721  540002 cri.go:129] container: {ID:9b642e260eb4b5d26d7f23ed36f23236cb26d277be6e7cfb9d24ed67d7106b31 Status:running}
	I1129 09:04:48.541744  540002 cri.go:135] skipping {9b642e260eb4b5d26d7f23ed36f23236cb26d277be6e7cfb9d24ed67d7106b31 running}: state = "running", want "paused"
	I1129 09:04:48.541755  540002 cri.go:129] container: {ID:ce95e7374b8278188ca24b99d033fc80e6e2c033e081e21d65077346a5cca7b1 Status:running}
	I1129 09:04:48.541763  540002 cri.go:131] skipping ce95e7374b8278188ca24b99d033fc80e6e2c033e081e21d65077346a5cca7b1 - not in ps
	I1129 09:04:48.541767  540002 cri.go:129] container: {ID:d74d2e3ba1383af19996948562d0ac1cfcf9fdb7fa9f4f090fd20efb02f69b77 Status:running}
	I1129 09:04:48.541774  540002 cri.go:131] skipping d74d2e3ba1383af19996948562d0ac1cfcf9fdb7fa9f4f090fd20efb02f69b77 - not in ps
	I1129 09:04:48.541826  540002 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:04:48.561896  540002 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 09:04:48.561923  540002 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 09:04:48.561971  540002 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 09:04:48.586587  540002 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 09:04:48.588263  540002 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-106601" does not appear in /home/jenkins/minikube-integration/22000-255825/kubeconfig
	I1129 09:04:48.589069  540002 kubeconfig.go:62] /home/jenkins/minikube-integration/22000-255825/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-106601" cluster setting kubeconfig missing "newest-cni-106601" context setting]
	I1129 09:04:48.590377  540002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/kubeconfig: {Name:mk7d91966efd00ccef892cf02f31ec14469accbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:04:48.595894  540002 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 09:04:48.610984  540002 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1129 09:04:48.611158  540002 kubeadm.go:602] duration metric: took 49.225267ms to restartPrimaryControlPlane
	I1129 09:04:48.611205  540002 kubeadm.go:403] duration metric: took 178.728899ms to StartCluster
	I1129 09:04:48.611229  540002 settings.go:142] acquiring lock: {Name:mk6dbed29e5e99d89b1cbbd9e561d8f8791ae9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:04:48.611308  540002 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-255825/kubeconfig
	I1129 09:04:48.613488  540002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/kubeconfig: {Name:mk7d91966efd00ccef892cf02f31ec14469accbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:04:48.613783  540002 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1129 09:04:48.614053  540002 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:04:48.614159  540002 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-106601"
	I1129 09:04:48.614179  540002 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-106601"
	W1129 09:04:48.614188  540002 addons.go:248] addon storage-provisioner should already be in state true
	I1129 09:04:48.614219  540002 host.go:66] Checking if "newest-cni-106601" exists ...
	I1129 09:04:48.614276  540002 config.go:182] Loaded profile config "newest-cni-106601": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:04:48.614331  540002 addons.go:70] Setting default-storageclass=true in profile "newest-cni-106601"
	I1129 09:04:48.614345  540002 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-106601"
	I1129 09:04:48.614641  540002 cli_runner.go:164] Run: docker container inspect newest-cni-106601 --format={{.State.Status}}
	I1129 09:04:48.614749  540002 cli_runner.go:164] Run: docker container inspect newest-cni-106601 --format={{.State.Status}}
	I1129 09:04:48.614890  540002 addons.go:70] Setting dashboard=true in profile "newest-cni-106601"
	I1129 09:04:48.614905  540002 addons.go:70] Setting metrics-server=true in profile "newest-cni-106601"
	I1129 09:04:48.614915  540002 addons.go:239] Setting addon dashboard=true in "newest-cni-106601"
	I1129 09:04:48.614924  540002 addons.go:239] Setting addon metrics-server=true in "newest-cni-106601"
	W1129 09:04:48.614934  540002 addons.go:248] addon metrics-server should already be in state true
	I1129 09:04:48.614961  540002 host.go:66] Checking if "newest-cni-106601" exists ...
	W1129 09:04:48.614967  540002 addons.go:248] addon dashboard should already be in state true
	I1129 09:04:48.615000  540002 host.go:66] Checking if "newest-cni-106601" exists ...
	I1129 09:04:48.615405  540002 cli_runner.go:164] Run: docker container inspect newest-cni-106601 --format={{.State.Status}}
	I1129 09:04:48.615611  540002 cli_runner.go:164] Run: docker container inspect newest-cni-106601 --format={{.State.Status}}
	I1129 09:04:48.615889  540002 out.go:179] * Verifying Kubernetes components...
	I1129 09:04:48.618636  540002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:04:48.644501  540002 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1129 09:04:48.646523  540002 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1129 09:04:48.647541  540002 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1129 09:04:48.647608  540002 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1129 09:04:48.647708  540002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-106601
	I1129 09:04:48.660841  540002 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:04:48.661029  540002 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1129 09:04:48.661375  540002 addons.go:239] Setting addon default-storageclass=true in "newest-cni-106601"
	W1129 09:04:48.662249  540002 addons.go:248] addon default-storageclass should already be in state true
	I1129 09:04:48.662297  540002 host.go:66] Checking if "newest-cni-106601" exists ...
	I1129 09:04:48.662765  540002 cli_runner.go:164] Run: docker container inspect newest-cni-106601 --format={{.State.Status}}
	I1129 09:04:48.662028  540002 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1129 09:04:48.663802  540002 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1129 09:04:48.663882  540002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-106601
	I1129 09:04:48.662105  540002 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:04:48.664164  540002 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:04:48.664211  540002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-106601
	I1129 09:04:48.686721  540002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/newest-cni-106601/id_rsa Username:docker}
	I1129 09:04:48.699446  540002 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:04:48.699472  540002 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:04:48.699533  540002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-106601
	I1129 09:04:48.703943  540002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/newest-cni-106601/id_rsa Username:docker}
	I1129 09:04:48.725416  540002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/newest-cni-106601/id_rsa Username:docker}
	I1129 09:04:48.753060  540002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/newest-cni-106601/id_rsa Username:docker}
	I1129 09:04:48.901039  540002 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1129 09:04:48.901070  540002 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1129 09:04:48.948085  540002 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1129 09:04:48.948109  540002 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1129 09:04:48.976716  540002 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1129 09:04:48.976757  540002 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1129 09:04:49.014474  540002 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1129 09:04:49.014498  540002 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1129 09:04:49.029488  540002 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1129 09:04:49.029853  540002 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1129 09:04:49.035846  540002 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1129 09:04:49.035926  540002 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1129 09:04:49.044889  540002 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:04:49.046424  540002 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:04:49.065755  540002 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:04:49.099375  540002 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1129 09:04:49.099507  540002 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1129 09:04:49.113713  540002 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1129 09:04:49.113756  540002 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1129 09:04:49.163824  540002 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1129 09:04:49.208317  540002 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1129 09:04:49.208346  540002 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1129 09:04:49.288712  540002 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1129 09:04:49.288982  540002 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1129 09:04:49.327546  540002 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1129 09:04:49.327584  540002 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1129 09:04:49.351687  540002 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 09:04:49.351713  540002 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1129 09:04:49.392072  540002 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1129 09:04:51.487405  540002 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.440955285s)
	I1129 09:04:51.487475  540002 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:04:51.487557  540002 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:04:51.487405  540002 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.442441553s)
	I1129 09:04:51.489195  540002 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.421773152s)
	I1129 09:04:51.509618  540002 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.117491398s)
	I1129 09:04:51.510139  540002 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.346266894s)
	I1129 09:04:51.510317  540002 addons.go:495] Verifying addon metrics-server=true in "newest-cni-106601"
	I1129 09:04:51.510337  540002 api_server.go:72] duration metric: took 2.896515164s to wait for apiserver process to appear ...
	I1129 09:04:51.510395  540002 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:04:51.510419  540002 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1129 09:04:51.512079  540002 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-106601 addons enable metrics-server
	
	I1129 09:04:51.514500  540002 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	4d1c856805d0a       56cc512116c8f       10 seconds ago      Running             busybox                   0                   439301fd61641       busybox                                                default
	66a3ae80c6174       52546a367cc9e       17 seconds ago      Running             coredns                   0                   567152cbf13bf       coredns-66bc5c9577-d7vmg                               kube-system
	a84a625c10a66       6e38f40d628db       17 seconds ago      Running             storage-provisioner       0                   86786bfc75566       storage-provisioner                                    kube-system
	634267a48c9ee       409467f978b4a       28 seconds ago      Running             kindnet-cni               0                   85e39fdd58596       kindnet-g5whk                                          kube-system
	7383b28a1b358       fc25172553d79       28 seconds ago      Running             kube-proxy                0                   c1a0327d519fc       kube-proxy-v9bbz                                       kube-system
	3effd19c5883f       5f1f5298c888d       39 seconds ago      Running             etcd                      0                   b1375bd22fe4c       etcd-default-k8s-diff-port-357829                      kube-system
	1018519011733       c3994bc696102       39 seconds ago      Running             kube-apiserver            0                   75476fab535a6       kube-apiserver-default-k8s-diff-port-357829            kube-system
	2a2e1928a205a       7dd6aaa1717ab       39 seconds ago      Running             kube-scheduler            0                   c5edcc06db6c2       kube-scheduler-default-k8s-diff-port-357829            kube-system
	30faae14a64ae       c80c8dbafe7dd       39 seconds ago      Running             kube-controller-manager   0                   2a6c01d6e1319       kube-controller-manager-default-k8s-diff-port-357829   kube-system
	
	
	==> containerd <==
	Nov 29 09:04:34 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:34.909847368Z" level=info msg="StartContainer for \"a84a625c10a66eca43ad40359036d8f8bae7f97fdb8d57d903806a13bdd7de2d\""
	Nov 29 09:04:34 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:34.912486134Z" level=info msg="connecting to shim a84a625c10a66eca43ad40359036d8f8bae7f97fdb8d57d903806a13bdd7de2d" address="unix:///run/containerd/s/668b6d16f551d3ab4b9d1881ee008512f38b3b8dbcc0ba011d854a44b74662b0" protocol=ttrpc version=3
	Nov 29 09:04:34 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:34.921379893Z" level=info msg="CreateContainer within sandbox \"567152cbf13bff4c1d14dd2112fcd3e28303ca49c4e7030f50dd073b50549f88\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 29 09:04:34 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:34.929829109Z" level=info msg="Container 66a3ae80c61746658348d60a62eab2930b5ad08cf7a1c909a1439060cdd1cdd7: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:04:34 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:34.938326164Z" level=info msg="CreateContainer within sandbox \"567152cbf13bff4c1d14dd2112fcd3e28303ca49c4e7030f50dd073b50549f88\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"66a3ae80c61746658348d60a62eab2930b5ad08cf7a1c909a1439060cdd1cdd7\""
	Nov 29 09:04:34 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:34.939723621Z" level=info msg="StartContainer for \"66a3ae80c61746658348d60a62eab2930b5ad08cf7a1c909a1439060cdd1cdd7\""
	Nov 29 09:04:34 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:34.942663414Z" level=info msg="connecting to shim 66a3ae80c61746658348d60a62eab2930b5ad08cf7a1c909a1439060cdd1cdd7" address="unix:///run/containerd/s/42fd1cb6f027819fd08220fa4a3d5c5af17174c387922f3a55bf6c8b2d55a665" protocol=ttrpc version=3
	Nov 29 09:04:34 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:34.982196209Z" level=info msg="StartContainer for \"a84a625c10a66eca43ad40359036d8f8bae7f97fdb8d57d903806a13bdd7de2d\" returns successfully"
	Nov 29 09:04:35 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:35.012336135Z" level=info msg="StartContainer for \"66a3ae80c61746658348d60a62eab2930b5ad08cf7a1c909a1439060cdd1cdd7\" returns successfully"
	Nov 29 09:04:38 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:38.513802565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:a7187d53-caa5-4d82-a363-42dacbd45f01,Namespace:default,Attempt:0,}"
	Nov 29 09:04:38 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:38.552838921Z" level=info msg="connecting to shim 439301fd61641c2775c93c0f05d90c3ccbcf251a873c2c816b228c3de587e2f7" address="unix:///run/containerd/s/b878ca3f6f91937f9be62e602753711ebfd091f1a7aebb1d4bc44f7db49c49de" namespace=k8s.io protocol=ttrpc version=3
	Nov 29 09:04:38 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:38.635176176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:a7187d53-caa5-4d82-a363-42dacbd45f01,Namespace:default,Attempt:0,} returns sandbox id \"439301fd61641c2775c93c0f05d90c3ccbcf251a873c2c816b228c3de587e2f7\""
	Nov 29 09:04:38 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:38.638695027Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 29 09:04:41 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:41.319501915Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:04:41 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:41.320676365Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396647"
	Nov 29 09:04:41 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:41.321872804Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:04:41 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:41.324015337Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 29 09:04:41 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:41.324719708Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.685903014s"
	Nov 29 09:04:41 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:41.324784343Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 29 09:04:41 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:41.328928236Z" level=info msg="CreateContainer within sandbox \"439301fd61641c2775c93c0f05d90c3ccbcf251a873c2c816b228c3de587e2f7\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 29 09:04:41 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:41.336417910Z" level=info msg="Container 4d1c856805d0aa751125afa04694d2d9343c8904fcdff215566e5c873c81af57: CDI devices from CRI Config.CDIDevices: []"
	Nov 29 09:04:41 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:41.341768402Z" level=info msg="CreateContainer within sandbox \"439301fd61641c2775c93c0f05d90c3ccbcf251a873c2c816b228c3de587e2f7\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"4d1c856805d0aa751125afa04694d2d9343c8904fcdff215566e5c873c81af57\""
	Nov 29 09:04:41 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:41.342354060Z" level=info msg="StartContainer for \"4d1c856805d0aa751125afa04694d2d9343c8904fcdff215566e5c873c81af57\""
	Nov 29 09:04:41 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:41.343091405Z" level=info msg="connecting to shim 4d1c856805d0aa751125afa04694d2d9343c8904fcdff215566e5c873c81af57" address="unix:///run/containerd/s/b878ca3f6f91937f9be62e602753711ebfd091f1a7aebb1d4bc44f7db49c49de" protocol=ttrpc version=3
	Nov 29 09:04:41 default-k8s-diff-port-357829 containerd[664]: time="2025-11-29T09:04:41.411995510Z" level=info msg="StartContainer for \"4d1c856805d0aa751125afa04694d2d9343c8904fcdff215566e5c873c81af57\" returns successfully"
	
	
	==> coredns [66a3ae80c61746658348d60a62eab2930b5ad08cf7a1c909a1439060cdd1cdd7] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37370 - 56095 "HINFO IN 1396913741126626310.2881560545536060347. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033848092s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-357829
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-357829
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=default-k8s-diff-port-357829
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_04_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:04:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-357829
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:04:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:04:48 +0000   Sat, 29 Nov 2025 09:04:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:04:48 +0000   Sat, 29 Nov 2025 09:04:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:04:48 +0000   Sat, 29 Nov 2025 09:04:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:04:48 +0000   Sat, 29 Nov 2025 09:04:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-357829
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                c7cf2208-b787-4439-9b47-54475ca3d04f
	  Boot ID:                    b81dce2f-73d5-4349-b473-aa1210058cb8
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 coredns-66bc5c9577-d7vmg                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-default-k8s-diff-port-357829                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         36s
	  kube-system                 kindnet-g5whk                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-default-k8s-diff-port-357829             250m (3%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-357829    200m (2%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-v9bbz                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-default-k8s-diff-port-357829             100m (1%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 28s   kube-proxy       
	  Normal  Starting                 41s   kubelet          Starting kubelet.
	  Normal  Starting                 35s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  35s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  35s   kubelet          Node default-k8s-diff-port-357829 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s   kubelet          Node default-k8s-diff-port-357829 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s   kubelet          Node default-k8s-diff-port-357829 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s   node-controller  Node default-k8s-diff-port-357829 event: Registered Node default-k8s-diff-port-357829 in Controller
	  Normal  NodeReady                18s   kubelet          Node default-k8s-diff-port-357829 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov29 07:17] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001881] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084003] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.378167] i8042: Warning: Keylock active
	[  +0.012106] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.460417] block sda: the capability attribute has been deprecated.
	[  +0.079627] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.021012] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.285522] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [3effd19c5883f9175a3107ccf1521e283880d674cd323abfdc755cebd4249c98] <==
	{"level":"warn","ts":"2025-11-29T09:04:14.286040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:04:14.299320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:04:14.306130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:04:14.325178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:04:14.332053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:04:14.339984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:04:14.384999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:04:21.166191Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.788993ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790340320339420 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/disruption-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/disruption-controller\" value_size:126 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-29T09:04:21.166367Z","caller":"traceutil/trace.go:172","msg":"trace[1875009778] transaction","detail":"{read_only:false; response_revision:293; number_of_response:1; }","duration":"198.299577ms","start":"2025-11-29T09:04:20.968040Z","end":"2025-11-29T09:04:21.166339Z","steps":["trace[1875009778] 'process raft request'  (duration: 58.968887ms)","trace[1875009778] 'compare'  (duration: 138.643269ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-29T09:04:21.621068Z","caller":"traceutil/trace.go:172","msg":"trace[518274994] transaction","detail":"{read_only:false; response_revision:295; number_of_response:1; }","duration":"252.669049ms","start":"2025-11-29T09:04:21.368375Z","end":"2025-11-29T09:04:21.621045Z","steps":["trace[518274994] 'process raft request'  (duration: 252.540523ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-29T09:04:21.894100Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"269.958082ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" limit:1 ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2025-11-29T09:04:21.894169Z","caller":"traceutil/trace.go:172","msg":"trace[1295694699] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:295; }","duration":"270.043585ms","start":"2025-11-29T09:04:21.624109Z","end":"2025-11-29T09:04:21.894152Z","steps":["trace[1295694699] 'range keys from in-memory index tree'  (duration: 269.791727ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-29T09:04:22.144475Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.360675ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790340320339437 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" value_size:126 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-29T09:04:22.144700Z","caller":"traceutil/trace.go:172","msg":"trace[998990797] transaction","detail":"{read_only:false; response_revision:297; number_of_response:1; }","duration":"181.532983ms","start":"2025-11-29T09:04:21.963151Z","end":"2025-11-29T09:04:22.144684Z","steps":["trace[998990797] 'process raft request'  (duration: 53.909406ms)","trace[998990797] 'compare'  (duration: 127.236145ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-29T09:04:22.413146Z","caller":"traceutil/trace.go:172","msg":"trace[1094955470] transaction","detail":"{read_only:false; response_revision:303; number_of_response:1; }","duration":"136.357749ms","start":"2025-11-29T09:04:22.276767Z","end":"2025-11-29T09:04:22.413125Z","steps":["trace[1094955470] 'process raft request'  (duration: 136.301935ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:22.413339Z","caller":"traceutil/trace.go:172","msg":"trace[2041338908] transaction","detail":"{read_only:false; response_revision:302; number_of_response:1; }","duration":"138.342577ms","start":"2025-11-29T09:04:22.274974Z","end":"2025-11-29T09:04:22.413317Z","steps":["trace[2041338908] 'process raft request'  (duration: 137.932957ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-29T09:04:36.758138Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"183.915899ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-29T09:04:36.758218Z","caller":"traceutil/trace.go:172","msg":"trace[275767652] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:419; }","duration":"184.010852ms","start":"2025-11-29T09:04:36.574189Z","end":"2025-11-29T09:04:36.758200Z","steps":["trace[275767652] 'agreement among raft nodes before linearized reading'  (duration: 54.241902ms)","trace[275767652] 'range keys from in-memory index tree'  (duration: 129.626691ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-29T09:04:36.758440Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.703317ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790340320339752 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.103.2\" mod_revision:389 > success:<request_put:<key:\"/registry/masterleases/192.168.103.2\" value_size:66 lease:4650418303465563942 >> failure:<request_range:<key:\"/registry/masterleases/192.168.103.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-29T09:04:36.758525Z","caller":"traceutil/trace.go:172","msg":"trace[1852514116] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"254.84566ms","start":"2025-11-29T09:04:36.503665Z","end":"2025-11-29T09:04:36.758511Z","steps":["trace[1852514116] 'process raft request'  (duration: 124.804294ms)","trace[1852514116] 'compare'  (duration: 129.604883ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-29T09:04:37.104066Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"154.146215ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-29T09:04:37.104150Z","caller":"traceutil/trace.go:172","msg":"trace[1155203678] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:420; }","duration":"154.253141ms","start":"2025-11-29T09:04:36.949879Z","end":"2025-11-29T09:04:37.104132Z","steps":["trace[1155203678] 'range keys from in-memory index tree'  (duration: 154.096983ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-29T09:04:37.104158Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"130.890411ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-357829\" limit:1 ","response":"range_response_count:1 size:4532"}
	{"level":"info","ts":"2025-11-29T09:04:37.104207Z","caller":"traceutil/trace.go:172","msg":"trace[912961907] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-357829; range_end:; response_count:1; response_revision:420; }","duration":"130.95116ms","start":"2025-11-29T09:04:36.973245Z","end":"2025-11-29T09:04:37.104196Z","steps":["trace[912961907] 'range keys from in-memory index tree'  (duration: 130.725599ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:04:37.232523Z","caller":"traceutil/trace.go:172","msg":"trace[1297352301] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"121.783541ms","start":"2025-11-29T09:04:37.110720Z","end":"2025-11-29T09:04:37.232504Z","steps":["trace[1297352301] 'process raft request'  (duration: 121.632424ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:04:52 up  1:47,  0 user,  load average: 6.38, 3.92, 11.44
	Linux default-k8s-diff-port-357829 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [634267a48c9ee4a113f706b11c4923aa743934332d4a645040da54c768f74ea1] <==
	I1129 09:04:24.086389       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1129 09:04:24.086728       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1129 09:04:24.086942       1 main.go:148] setting mtu 1500 for CNI 
	I1129 09:04:24.086966       1 main.go:178] kindnetd IP family: "ipv4"
	I1129 09:04:24.086981       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-29T09:04:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1129 09:04:24.386215       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1129 09:04:24.386242       1 controller.go:381] "Waiting for informer caches to sync"
	I1129 09:04:24.386255       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1129 09:04:24.449870       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1129 09:04:24.786380       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1129 09:04:24.786426       1 metrics.go:72] Registering metrics
	I1129 09:04:24.786523       1 controller.go:711] "Syncing nftables rules"
	I1129 09:04:34.387272       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1129 09:04:34.387342       1 main.go:301] handling current node
	I1129 09:04:44.386061       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1129 09:04:44.386100       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1018519011733056917c1040d66f2f3b50adbe41e935b8e5e3a77ad04a4f2cec] <==
	E1129 09:04:14.946979       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1129 09:04:14.994469       1 controller.go:667] quota admission added evaluator for: namespaces
	I1129 09:04:14.997436       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:04:14.997495       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1129 09:04:15.001721       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:04:15.001766       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 09:04:15.086989       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:04:15.797278       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1129 09:04:15.801333       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1129 09:04:15.801350       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:04:16.347002       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:04:16.385850       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:04:16.504133       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1129 09:04:16.510527       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1129 09:04:16.511843       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:04:16.517264       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1129 09:04:16.820937       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:04:17.568388       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:04:17.585645       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1129 09:04:17.595141       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1129 09:04:22.420877       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:04:22.421650       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:04:22.428123       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:04:22.825109       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1129 09:04:48.392923       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:39430: use of closed network connection
	
	
	==> kube-controller-manager [30faae14a64ae82b07ed17cc7e4d78756201313e32105c0c66064b8bcc62bc83] <==
	I1129 09:04:22.222665       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1129 09:04:22.222774       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1129 09:04:22.223870       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1129 09:04:22.223966       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 09:04:22.224063       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1129 09:04:22.227214       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:04:22.232368       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1129 09:04:22.240832       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1129 09:04:22.247272       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1129 09:04:22.247362       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1129 09:04:22.253582       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:04:22.258800       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1129 09:04:22.262291       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-357829" podCIDRs=["10.244.0.0/24"]
	I1129 09:04:22.270065       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1129 09:04:22.270191       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1129 09:04:22.270228       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1129 09:04:22.270288       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1129 09:04:22.270813       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 09:04:22.273056       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 09:04:22.273542       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:04:22.275027       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1129 09:04:22.275174       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1129 09:04:22.275331       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-357829"
	I1129 09:04:22.275428       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1129 09:04:37.277596       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [7383b28a1b35820a1e07133341cbb9130ac641d77de659266bcd4ac2296264e9] <==
	I1129 09:04:23.529937       1 server_linux.go:53] "Using iptables proxy"
	I1129 09:04:23.617952       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:04:23.718513       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:04:23.718567       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1129 09:04:23.718693       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:04:23.748087       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1129 09:04:23.748147       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:04:23.756410       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:04:23.757352       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:04:23.757747       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:04:23.760667       1 config.go:200] "Starting service config controller"
	I1129 09:04:23.760782       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:04:23.760694       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:04:23.760806       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:04:23.760929       1 config.go:309] "Starting node config controller"
	I1129 09:04:23.760945       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:04:23.760723       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:04:23.763727       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:04:23.765878       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1129 09:04:23.861230       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 09:04:23.861252       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 09:04:23.861230       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [2a2e1928a205a6f671a9d953f408cc7a51eec7b6e0e412ec88c2b9238beb6579] <==
	E1129 09:04:14.856341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 09:04:14.856812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:04:14.856859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 09:04:14.856989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 09:04:14.857090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 09:04:14.857188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 09:04:14.857883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 09:04:14.857912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 09:04:14.857967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 09:04:14.858299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 09:04:14.858353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1129 09:04:14.858310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 09:04:15.696901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 09:04:15.720319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 09:04:15.741496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 09:04:15.742325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 09:04:15.783296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:04:15.810461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 09:04:15.902316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 09:04:15.903213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 09:04:16.045558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1129 09:04:16.062054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 09:04:16.090151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 09:04:16.100776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1129 09:04:17.752776       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 09:04:18 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:18.497769    1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-357829" podStartSLOduration=1.497724054 podStartE2EDuration="1.497724054s" podCreationTimestamp="2025-11-29 09:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:18.488337798 +0000 UTC m=+1.143701825" watchObservedRunningTime="2025-11-29 09:04:18.497724054 +0000 UTC m=+1.153088082"
	Nov 29 09:04:18 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:18.497921    1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-357829" podStartSLOduration=2.497911863 podStartE2EDuration="2.497911863s" podCreationTimestamp="2025-11-29 09:04:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:18.497701309 +0000 UTC m=+1.153065337" watchObservedRunningTime="2025-11-29 09:04:18.497911863 +0000 UTC m=+1.153275890"
	Nov 29 09:04:18 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:18.523513    1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-357829" podStartSLOduration=1.523492159 podStartE2EDuration="1.523492159s" podCreationTimestamp="2025-11-29 09:04:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:18.510562949 +0000 UTC m=+1.165926974" watchObservedRunningTime="2025-11-29 09:04:18.523492159 +0000 UTC m=+1.178856187"
	Nov 29 09:04:18 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:18.535948    1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-357829" podStartSLOduration=3.535930395 podStartE2EDuration="3.535930395s" podCreationTimestamp="2025-11-29 09:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:18.523930739 +0000 UTC m=+1.179294767" watchObservedRunningTime="2025-11-29 09:04:18.535930395 +0000 UTC m=+1.191294423"
	Nov 29 09:04:22 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:22.315595    1421 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 29 09:04:22 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:22.317154    1421 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 29 09:04:22 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:22.865831    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxx59\" (UniqueName: \"kubernetes.io/projected/6a515c70-840f-41c2-b1e4-6de13b23e5f3-kube-api-access-qxx59\") pod \"kube-proxy-v9bbz\" (UID: \"6a515c70-840f-41c2-b1e4-6de13b23e5f3\") " pod="kube-system/kube-proxy-v9bbz"
	Nov 29 09:04:22 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:22.865884    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a515c70-840f-41c2-b1e4-6de13b23e5f3-lib-modules\") pod \"kube-proxy-v9bbz\" (UID: \"6a515c70-840f-41c2-b1e4-6de13b23e5f3\") " pod="kube-system/kube-proxy-v9bbz"
	Nov 29 09:04:22 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:22.865917    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6a515c70-840f-41c2-b1e4-6de13b23e5f3-kube-proxy\") pod \"kube-proxy-v9bbz\" (UID: \"6a515c70-840f-41c2-b1e4-6de13b23e5f3\") " pod="kube-system/kube-proxy-v9bbz"
	Nov 29 09:04:22 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:22.865941    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a515c70-840f-41c2-b1e4-6de13b23e5f3-xtables-lock\") pod \"kube-proxy-v9bbz\" (UID: \"6a515c70-840f-41c2-b1e4-6de13b23e5f3\") " pod="kube-system/kube-proxy-v9bbz"
	Nov 29 09:04:22 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:22.967208    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgj94\" (UniqueName: \"kubernetes.io/projected/5563c069-5b20-4835-941c-48eb3b04c051-kube-api-access-bgj94\") pod \"kindnet-g5whk\" (UID: \"5563c069-5b20-4835-941c-48eb3b04c051\") " pod="kube-system/kindnet-g5whk"
	Nov 29 09:04:22 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:22.967623    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5563c069-5b20-4835-941c-48eb3b04c051-lib-modules\") pod \"kindnet-g5whk\" (UID: \"5563c069-5b20-4835-941c-48eb3b04c051\") " pod="kube-system/kindnet-g5whk"
	Nov 29 09:04:22 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:22.967695    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5563c069-5b20-4835-941c-48eb3b04c051-xtables-lock\") pod \"kindnet-g5whk\" (UID: \"5563c069-5b20-4835-941c-48eb3b04c051\") " pod="kube-system/kindnet-g5whk"
	Nov 29 09:04:22 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:22.967744    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5563c069-5b20-4835-941c-48eb3b04c051-cni-cfg\") pod \"kindnet-g5whk\" (UID: \"5563c069-5b20-4835-941c-48eb3b04c051\") " pod="kube-system/kindnet-g5whk"
	Nov 29 09:04:24 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:24.491925    1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v9bbz" podStartSLOduration=2.491903092 podStartE2EDuration="2.491903092s" podCreationTimestamp="2025-11-29 09:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:24.491601661 +0000 UTC m=+7.146965690" watchObservedRunningTime="2025-11-29 09:04:24.491903092 +0000 UTC m=+7.147267120"
	Nov 29 09:04:24 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:24.502218    1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-g5whk" podStartSLOduration=2.502192331 podStartE2EDuration="2.502192331s" podCreationTimestamp="2025-11-29 09:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:24.501891007 +0000 UTC m=+7.157255036" watchObservedRunningTime="2025-11-29 09:04:24.502192331 +0000 UTC m=+7.157556360"
	Nov 29 09:04:34 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:34.427252    1421 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 29 09:04:34 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:34.560153    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cgmt\" (UniqueName: \"kubernetes.io/projected/d9aa47c6-1005-4a91-a986-819f21c0cfda-kube-api-access-8cgmt\") pod \"storage-provisioner\" (UID: \"d9aa47c6-1005-4a91-a986-819f21c0cfda\") " pod="kube-system/storage-provisioner"
	Nov 29 09:04:34 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:34.560223    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ebe88f4-4c20-4523-8642-f54615c1f605-config-volume\") pod \"coredns-66bc5c9577-d7vmg\" (UID: \"4ebe88f4-4c20-4523-8642-f54615c1f605\") " pod="kube-system/coredns-66bc5c9577-d7vmg"
	Nov 29 09:04:34 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:34.560250    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg92d\" (UniqueName: \"kubernetes.io/projected/4ebe88f4-4c20-4523-8642-f54615c1f605-kube-api-access-mg92d\") pod \"coredns-66bc5c9577-d7vmg\" (UID: \"4ebe88f4-4c20-4523-8642-f54615c1f605\") " pod="kube-system/coredns-66bc5c9577-d7vmg"
	Nov 29 09:04:34 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:34.560353    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d9aa47c6-1005-4a91-a986-819f21c0cfda-tmp\") pod \"storage-provisioner\" (UID: \"d9aa47c6-1005-4a91-a986-819f21c0cfda\") " pod="kube-system/storage-provisioner"
	Nov 29 09:04:35 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:35.541121    1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-d7vmg" podStartSLOduration=12.541098367 podStartE2EDuration="12.541098367s" podCreationTimestamp="2025-11-29 09:04:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:35.529794717 +0000 UTC m=+18.185158744" watchObservedRunningTime="2025-11-29 09:04:35.541098367 +0000 UTC m=+18.196462398"
	Nov 29 09:04:35 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:35.541254    1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.541246386 podStartE2EDuration="12.541246386s" podCreationTimestamp="2025-11-29 09:04:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-29 09:04:35.540644121 +0000 UTC m=+18.196008145" watchObservedRunningTime="2025-11-29 09:04:35.541246386 +0000 UTC m=+18.196610414"
	Nov 29 09:04:38 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:38.285926    1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kkmd\" (UniqueName: \"kubernetes.io/projected/a7187d53-caa5-4d82-a363-42dacbd45f01-kube-api-access-4kkmd\") pod \"busybox\" (UID: \"a7187d53-caa5-4d82-a363-42dacbd45f01\") " pod="default/busybox"
	Nov 29 09:04:41 default-k8s-diff-port-357829 kubelet[1421]: I1129 09:04:41.550469    1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.862618884 podStartE2EDuration="3.550445582s" podCreationTimestamp="2025-11-29 09:04:38 +0000 UTC" firstStartedPulling="2025-11-29 09:04:38.637975097 +0000 UTC m=+21.293339104" lastFinishedPulling="2025-11-29 09:04:41.325801795 +0000 UTC m=+23.981165802" observedRunningTime="2025-11-29 09:04:41.550305968 +0000 UTC m=+24.205669996" watchObservedRunningTime="2025-11-29 09:04:41.550445582 +0000 UTC m=+24.205809611"
	
	
	==> storage-provisioner [a84a625c10a66eca43ad40359036d8f8bae7f97fdb8d57d903806a13bdd7de2d] <==
	I1129 09:04:35.006308       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1129 09:04:35.009364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:35.015713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:04:35.016055       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1129 09:04:35.016415       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-357829_b6a1c520-de9d-494e-adbf-4b6205489313!
	I1129 09:04:35.016648       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1d490b20-7a86-4524-bb18-37c00fb6dca1", APIVersion:"v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-357829_b6a1c520-de9d-494e-adbf-4b6205489313 became leader
	W1129 09:04:35.021349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:35.025551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1129 09:04:35.117019       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-357829_b6a1c520-de9d-494e-adbf-4b6205489313!
	W1129 09:04:37.105354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:37.233774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:39.237067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:39.241532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:41.244603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:41.250904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:43.254461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:43.259439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:45.263604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:45.268407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:47.272639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:47.279094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:49.282777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:49.290195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:51.295202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 09:04:51.302595       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-357829 -n default-k8s-diff-port-357829
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-357829 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (15.42s)


Test pass (303/333)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 17.67
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.24
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 13.11
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.24
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.16
20 TestDownloadOnlyKic 0.44
21 TestBinaryMirror 0.87
22 TestOffline 57.53
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 129.62
29 TestAddons/serial/Volcano 40.14
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 10.46
35 TestAddons/parallel/Registry 15.43
36 TestAddons/parallel/RegistryCreds 0.7
37 TestAddons/parallel/Ingress 21.11
38 TestAddons/parallel/InspektorGadget 10.74
39 TestAddons/parallel/MetricsServer 5.73
41 TestAddons/parallel/CSI 52.81
42 TestAddons/parallel/Headlamp 17.57
43 TestAddons/parallel/CloudSpanner 5.61
44 TestAddons/parallel/LocalPath 55.73
45 TestAddons/parallel/NvidiaDevicePlugin 5.62
46 TestAddons/parallel/Yakd 10.76
47 TestAddons/parallel/AmdGpuDevicePlugin 5.63
48 TestAddons/StoppedEnableDisable 12.6
49 TestCertOptions 23.56
50 TestCertExpiration 214.3
52 TestForceSystemdFlag 42.44
53 TestForceSystemdEnv 27.55
54 TestDockerEnvContainerd 39.94
58 TestErrorSpam/setup 19.66
59 TestErrorSpam/start 0.68
60 TestErrorSpam/status 0.97
61 TestErrorSpam/pause 1.46
62 TestErrorSpam/unpause 1.55
63 TestErrorSpam/stop 2.08
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 39.57
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 5.87
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.42
75 TestFunctional/serial/CacheCmd/cache/add_local 2.01
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.55
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 41.62
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.2
86 TestFunctional/serial/LogsFileCmd 1.23
87 TestFunctional/serial/InvalidService 3.97
89 TestFunctional/parallel/ConfigCmd 0.56
90 TestFunctional/parallel/DashboardCmd 8.56
91 TestFunctional/parallel/DryRun 0.46
92 TestFunctional/parallel/InternationalLanguage 0.21
93 TestFunctional/parallel/StatusCmd 1.14
97 TestFunctional/parallel/ServiceCmdConnect 8.57
98 TestFunctional/parallel/AddonsCmd 0.16
99 TestFunctional/parallel/PersistentVolumeClaim 33.82
101 TestFunctional/parallel/SSHCmd 0.71
102 TestFunctional/parallel/CpCmd 1.81
103 TestFunctional/parallel/MySQL 23.99
104 TestFunctional/parallel/FileSync 0.35
105 TestFunctional/parallel/CertSync 1.85
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.66
113 TestFunctional/parallel/License 0.5
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.47
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.23
119 TestFunctional/parallel/Version/short 0.07
120 TestFunctional/parallel/Version/components 0.57
121 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
122 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
123 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
124 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
125 TestFunctional/parallel/ImageCommands/ImageBuild 4.51
126 TestFunctional/parallel/ImageCommands/Setup 1.96
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.23
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
135 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
136 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.18
138 TestFunctional/parallel/ServiceCmd/DeployApp 15.17
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.06
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.66
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.44
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.5
145 TestFunctional/parallel/ProfileCmd/profile_list 0.6
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
147 TestFunctional/parallel/MountCmd/any-port 9.2
148 TestFunctional/parallel/ServiceCmd/List 0.96
149 TestFunctional/parallel/MountCmd/specific-port 1.78
150 TestFunctional/parallel/ServiceCmd/JSONOutput 0.94
151 TestFunctional/parallel/ServiceCmd/HTTPS 0.6
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.6
153 TestFunctional/parallel/ServiceCmd/Format 0.56
154 TestFunctional/parallel/ServiceCmd/URL 0.57
155 TestFunctional/delete_echo-server_images 0.05
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 127
163 TestMultiControlPlane/serial/DeployApp 5.92
164 TestMultiControlPlane/serial/PingHostFromPods 1.21
165 TestMultiControlPlane/serial/AddWorkerNode 27.28
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.93
168 TestMultiControlPlane/serial/CopyFile 17.77
169 TestMultiControlPlane/serial/StopSecondaryNode 12.71
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.73
171 TestMultiControlPlane/serial/RestartSecondaryNode 8.64
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.92
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 95.84
174 TestMultiControlPlane/serial/DeleteSecondaryNode 9.46
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.72
176 TestMultiControlPlane/serial/StopCluster 36.14
177 TestMultiControlPlane/serial/RestartCluster 50.67
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.71
179 TestMultiControlPlane/serial/AddSecondaryNode 46.9
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.93
185 TestJSONOutput/start/Command 40.81
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.68
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.61
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.83
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.23
210 TestKicCustomNetwork/create_custom_network 38.48
211 TestKicCustomNetwork/use_default_bridge_network 25.66
212 TestKicExistingNetwork 23.09
213 TestKicCustomSubnet 27.52
214 TestKicStaticIP 27.72
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 46.42
219 TestMountStart/serial/StartWithMountFirst 7.69
220 TestMountStart/serial/VerifyMountFirst 0.29
221 TestMountStart/serial/StartWithMountSecond 7.47
222 TestMountStart/serial/VerifyMountSecond 0.29
223 TestMountStart/serial/DeleteFirst 1.72
224 TestMountStart/serial/VerifyMountPostDelete 0.28
225 TestMountStart/serial/Stop 1.26
226 TestMountStart/serial/RestartStopped 7.93
227 TestMountStart/serial/VerifyMountPostStop 0.29
230 TestMultiNode/serial/FreshStart2Nodes 62.98
231 TestMultiNode/serial/DeployApp2Nodes 5.31
232 TestMultiNode/serial/PingHostFrom2Pods 0.85
233 TestMultiNode/serial/AddNode 26.83
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.67
236 TestMultiNode/serial/CopyFile 10.11
237 TestMultiNode/serial/StopNode 2.29
238 TestMultiNode/serial/StartAfterStop 6.97
239 TestMultiNode/serial/RestartKeepsNodes 78.74
240 TestMultiNode/serial/DeleteNode 5.33
241 TestMultiNode/serial/StopMultiNode 24.04
242 TestMultiNode/serial/RestartMultiNode 47.64
243 TestMultiNode/serial/ValidateNameConflict 26.24
248 TestPreload 110.85
250 TestScheduledStopUnix 96.92
253 TestInsufficientStorage 9.3
254 TestRunningBinaryUpgrade 52.3
256 TestKubernetesUpgrade 324.86
257 TestMissingContainerUpgrade 118.51
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 38.17
261 TestNoKubernetes/serial/StartWithStopK8s 10.34
262 TestNoKubernetes/serial/Start 7.24
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
265 TestNoKubernetes/serial/ProfileList 6.54
266 TestStoppedBinaryUpgrade/Setup 3.95
267 TestStoppedBinaryUpgrade/Upgrade 47.69
268 TestNoKubernetes/serial/Stop 2.38
269 TestNoKubernetes/serial/StartNoArgs 7.29
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.21
280 TestPause/serial/Start 42.04
288 TestNetworkPlugins/group/false 4.17
292 TestPause/serial/SecondStartNoReconfiguration 6.16
293 TestPause/serial/Pause 0.75
294 TestPause/serial/VerifyStatus 0.35
295 TestPause/serial/Unpause 0.69
296 TestPause/serial/PauseAgain 0.74
297 TestPause/serial/DeletePaused 3.68
298 TestPause/serial/VerifyDeletedResources 15.76
300 TestStartStop/group/old-k8s-version/serial/FirstStart 47.77
302 TestStartStop/group/no-preload/serial/FirstStart 54.21
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.88
306 TestStartStop/group/old-k8s-version/serial/Stop 12.02
307 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.83
308 TestStartStop/group/no-preload/serial/Stop 12.01
309 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
310 TestStartStop/group/old-k8s-version/serial/SecondStart 45.34
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
312 TestStartStop/group/no-preload/serial/SecondStart 44.58
313 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
314 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
317 TestStartStop/group/old-k8s-version/serial/Pause 2.72
318 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
320 TestStartStop/group/embed-certs/serial/FirstStart 41.41
321 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
322 TestStartStop/group/no-preload/serial/Pause 3.22
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 44.42
326 TestStartStop/group/newest-cni/serial/FirstStart 32.36
328 TestNetworkPlugins/group/auto/Start 40.83
330 TestStartStop/group/newest-cni/serial/DeployApp 0
331 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.98
332 TestStartStop/group/newest-cni/serial/Stop 1.34
333 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.96
334 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
335 TestStartStop/group/newest-cni/serial/SecondStart 11.18
336 TestStartStop/group/embed-certs/serial/Stop 12.07
337 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
339 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
340 TestStartStop/group/newest-cni/serial/Pause 3.23
341 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.07
342 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.29
343 TestStartStop/group/embed-certs/serial/SecondStart 44.64
344 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.13
345 TestNetworkPlugins/group/kindnet/Start 43.29
346 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
347 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 49.02
348 TestNetworkPlugins/group/auto/KubeletFlags 0.34
349 TestNetworkPlugins/group/auto/NetCatPod 8.4
350 TestNetworkPlugins/group/auto/DNS 0.16
351 TestNetworkPlugins/group/auto/Localhost 0.13
352 TestNetworkPlugins/group/auto/HairPin 0.11
353 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
354 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
355 TestNetworkPlugins/group/calico/Start 52.61
356 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.39
358 TestNetworkPlugins/group/kindnet/NetCatPod 8.53
359 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.31
360 TestStartStop/group/embed-certs/serial/Pause 3.14
361 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
362 TestNetworkPlugins/group/custom-flannel/Start 55.75
363 TestNetworkPlugins/group/kindnet/DNS 0.16
364 TestNetworkPlugins/group/kindnet/Localhost 0.13
365 TestNetworkPlugins/group/kindnet/HairPin 0.12
366 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
367 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
368 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.75
369 TestNetworkPlugins/group/flannel/Start 54.24
370 TestNetworkPlugins/group/bridge/Start 72.31
371 TestNetworkPlugins/group/calico/ControllerPod 6.01
372 TestNetworkPlugins/group/calico/KubeletFlags 0.32
373 TestNetworkPlugins/group/calico/NetCatPod 9.21
374 TestNetworkPlugins/group/calico/DNS 0.14
375 TestNetworkPlugins/group/calico/Localhost 0.11
376 TestNetworkPlugins/group/calico/HairPin 0.13
377 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
378 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.21
379 TestNetworkPlugins/group/custom-flannel/DNS 0.13
380 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
381 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/enable-default-cni/Start 63.77
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
385 TestNetworkPlugins/group/flannel/NetCatPod 10.41
386 TestNetworkPlugins/group/flannel/DNS 0.13
387 TestNetworkPlugins/group/flannel/Localhost 0.11
388 TestNetworkPlugins/group/flannel/HairPin 0.11
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
390 TestNetworkPlugins/group/bridge/NetCatPod 9.2
391 TestNetworkPlugins/group/bridge/DNS 0.26
392 TestNetworkPlugins/group/bridge/Localhost 0.11
393 TestNetworkPlugins/group/bridge/HairPin 0.11
394 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
395 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.18
396 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
397 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
398 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
TestDownloadOnly/v1.28.0/json-events (17.67s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-352785 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-352785 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (17.67071562s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (17.67s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1129 08:29:08.108217  259483 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1129 08:29:08.108349  259483 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-352785
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-352785: exit status 85 (75.383438ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-352785 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-352785 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 08:28:50
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 08:28:50.493158  259495 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:28:50.493462  259495 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:28:50.493472  259495 out.go:374] Setting ErrFile to fd 2...
	I1129 08:28:50.493476  259495 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:28:50.493698  259495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-255825/.minikube/bin
	W1129 08:28:50.493890  259495 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22000-255825/.minikube/config/config.json: open /home/jenkins/minikube-integration/22000-255825/.minikube/config/config.json: no such file or directory
	I1129 08:28:50.494503  259495 out.go:368] Setting JSON to true
	I1129 08:28:50.496496  259495 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4274,"bootTime":1764400656,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 08:28:50.496707  259495 start.go:143] virtualization: kvm guest
	I1129 08:28:50.499879  259495 out.go:99] [download-only-352785] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1129 08:28:50.500011  259495 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22000-255825/.minikube/cache/preloaded-tarball: no such file or directory
	I1129 08:28:50.500078  259495 notify.go:221] Checking for updates...
	I1129 08:28:50.501335  259495 out.go:171] MINIKUBE_LOCATION=22000
	I1129 08:28:50.502747  259495 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 08:28:50.504114  259495 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22000-255825/kubeconfig
	I1129 08:28:50.505318  259495 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-255825/.minikube
	I1129 08:28:50.506421  259495 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1129 08:28:50.508494  259495 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1129 08:28:50.508817  259495 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 08:28:50.534242  259495 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 08:28:50.534373  259495 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:28:50.905371  259495 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-29 08:28:50.893601509 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 08:28:50.905493  259495 docker.go:319] overlay module found
	I1129 08:28:50.907304  259495 out.go:99] Using the docker driver based on user configuration
	I1129 08:28:50.907332  259495 start.go:309] selected driver: docker
	I1129 08:28:50.907339  259495 start.go:927] validating driver "docker" against <nil>
	I1129 08:28:50.907463  259495 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:28:50.971121  259495 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-29 08:28:50.961182873 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 08:28:50.971362  259495 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 08:28:50.972082  259495 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1129 08:28:50.972264  259495 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1129 08:28:50.974142  259495 out.go:171] Using Docker driver with root privileges
	I1129 08:28:50.975187  259495 cni.go:84] Creating CNI manager for ""
	I1129 08:28:50.975261  259495 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 08:28:50.975275  259495 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 08:28:50.975360  259495 start.go:353] cluster config:
	{Name:download-only-352785 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-352785 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 08:28:50.976696  259495 out.go:99] Starting "download-only-352785" primary control-plane node in "download-only-352785" cluster
	I1129 08:28:50.976713  259495 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1129 08:28:50.977789  259495 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1129 08:28:50.977825  259495 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1129 08:28:50.977954  259495 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 08:28:50.995897  259495 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1129 08:28:50.996114  259495 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1129 08:28:50.996201  259495 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1129 08:28:51.092797  259495 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1129 08:28:51.092866  259495 cache.go:65] Caching tarball of preloaded images
	I1129 08:28:51.093788  259495 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1129 08:28:51.095595  259495 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1129 08:28:51.095619  259495 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1129 08:28:51.207558  259495 preload.go:295] Got checksum from GCS API "2746dfda401436a5341e0500068bf339"
	I1129 08:28:51.207685  259495 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1129 08:29:04.512803  259495 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1129 08:29:04.513181  259495 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/download-only-352785/config.json ...
	I1129 08:29:04.513216  259495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/download-only-352785/config.json: {Name:mkcebd305923da37606ad1a4cf63de2d4935e1d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:29:04.514076  259495 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1129 08:29:04.514304  259495 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-352785 host does not exist
	  To start a cluster, run: "minikube start -p download-only-352785"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-352785
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.1/json-events (13.11s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-147488 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-147488 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (13.107595063s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (13.11s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1129 08:29:21.682166  259483 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1129 08:29:21.682211  259483 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-147488
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-147488: exit status 85 (78.265725ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-352785 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-352785 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 29 Nov 25 08:29 UTC │ 29 Nov 25 08:29 UTC │
	│ delete  │ -p download-only-352785                                                                                                                                                               │ download-only-352785 │ jenkins │ v1.37.0 │ 29 Nov 25 08:29 UTC │ 29 Nov 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-147488 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-147488 │ jenkins │ v1.37.0 │ 29 Nov 25 08:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 08:29:08
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 08:29:08.629094  259894 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:29:08.629367  259894 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:29:08.629378  259894 out.go:374] Setting ErrFile to fd 2...
	I1129 08:29:08.629382  259894 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:29:08.629624  259894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-255825/.minikube/bin
	I1129 08:29:08.630129  259894 out.go:368] Setting JSON to true
	I1129 08:29:08.631089  259894 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4293,"bootTime":1764400656,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 08:29:08.631162  259894 start.go:143] virtualization: kvm guest
	I1129 08:29:08.633012  259894 out.go:99] [download-only-147488] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 08:29:08.633197  259894 notify.go:221] Checking for updates...
	I1129 08:29:08.634478  259894 out.go:171] MINIKUBE_LOCATION=22000
	I1129 08:29:08.635649  259894 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 08:29:08.637090  259894 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22000-255825/kubeconfig
	I1129 08:29:08.638257  259894 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-255825/.minikube
	I1129 08:29:08.639465  259894 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1129 08:29:08.641599  259894 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1129 08:29:08.641878  259894 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 08:29:08.665314  259894 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 08:29:08.665406  259894 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:29:08.723183  259894 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-29 08:29:08.713505575 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 08:29:08.723309  259894 docker.go:319] overlay module found
	I1129 08:29:08.724986  259894 out.go:99] Using the docker driver based on user configuration
	I1129 08:29:08.725017  259894 start.go:309] selected driver: docker
	I1129 08:29:08.725026  259894 start.go:927] validating driver "docker" against <nil>
	I1129 08:29:08.725107  259894 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:29:08.785376  259894 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-29 08:29:08.776074579 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 08:29:08.785538  259894 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 08:29:08.786046  259894 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1129 08:29:08.786243  259894 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1129 08:29:08.787920  259894 out.go:171] Using Docker driver with root privileges
	I1129 08:29:08.788922  259894 cni.go:84] Creating CNI manager for ""
	I1129 08:29:08.788982  259894 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1129 08:29:08.788995  259894 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1129 08:29:08.789056  259894 start.go:353] cluster config:
	{Name:download-only-147488 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-147488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 08:29:08.790381  259894 out.go:99] Starting "download-only-147488" primary control-plane node in "download-only-147488" cluster
	I1129 08:29:08.790396  259894 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1129 08:29:08.791477  259894 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1129 08:29:08.791510  259894 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 08:29:08.791610  259894 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1129 08:29:08.808910  259894 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1129 08:29:08.809072  259894 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1129 08:29:08.809101  259894 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1129 08:29:08.809106  259894 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1129 08:29:08.809117  259894 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1129 08:29:09.202640  259894 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1129 08:29:09.202696  259894 cache.go:65] Caching tarball of preloaded images
	I1129 08:29:09.203497  259894 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1129 08:29:09.205032  259894 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1129 08:29:09.205057  259894 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1129 08:29:09.317346  259894 preload.go:295] Got checksum from GCS API "5d6e976daeaa84851976fc4d674fd8f4"
	I1129 08:29:09.317396  259894 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:5d6e976daeaa84851976fc4d674fd8f4 -> /home/jenkins/minikube-integration/22000-255825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-147488 host does not exist
	  To start a cluster, run: "minikube start -p download-only-147488"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

TestDownloadOnly/v1.34.1/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.24s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-147488
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.16s)

TestDownloadOnlyKic (0.44s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-884292 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-884292" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-884292
--- PASS: TestDownloadOnlyKic (0.44s)

TestBinaryMirror (0.87s)

=== RUN   TestBinaryMirror
I1129 08:29:22.911380  259483 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-085066 --alsologtostderr --binary-mirror http://127.0.0.1:35665 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-085066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-085066
--- PASS: TestBinaryMirror (0.87s)

TestOffline (57.53s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-760624 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-760624 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (54.917155194s)
helpers_test.go:175: Cleaning up "offline-containerd-760624" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-760624
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-760624: (2.609011479s)
--- PASS: TestOffline (57.53s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-509184
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-509184: exit status 85 (71.409605ms)

-- stdout --
	* Profile "addons-509184" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-509184"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-509184
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-509184: exit status 85 (71.333312ms)

-- stdout --
	* Profile "addons-509184" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-509184"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (129.62s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-509184 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-509184 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m9.62162642s)
--- PASS: TestAddons/Setup (129.62s)

TestAddons/serial/Volcano (40.14s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 18.155625ms
addons_test.go:868: volcano-scheduler stabilized in 18.255771ms
addons_test.go:876: volcano-admission stabilized in 18.328502ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-fhhcf" [c0fe170a-8660-42df-a5b6-c62e4039ac16] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004375994s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-fpcm8" [e452b320-4a0c-4352-a857-239df536a297] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003630759s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-4tl46" [e3f3e77e-5679-42d5-bbfa-512976e18143] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004478907s
addons_test.go:903: (dbg) Run:  kubectl --context addons-509184 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-509184 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-509184 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [a5ed272a-8705-4851-b42f-d9e3bb2dec1f] Pending
helpers_test.go:352: "test-job-nginx-0" [a5ed272a-8705-4851-b42f-d9e3bb2dec1f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [a5ed272a-8705-4851-b42f-d9e3bb2dec1f] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004285129s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-509184 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-509184 addons disable volcano --alsologtostderr -v=1: (11.731055644s)
--- PASS: TestAddons/serial/Volcano (40.14s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-509184 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-509184 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.46s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-509184 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-509184 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a2be1809-9bf6-4c82-90ea-c7ec7dc33b94] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a2be1809-9bf6-4c82-90ea-c7ec7dc33b94] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003116237s
addons_test.go:694: (dbg) Run:  kubectl --context addons-509184 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-509184 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-509184 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.46s)

                                                
                                    
x
+
TestAddons/parallel/Registry (15.43s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.633293ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-kxjgp" [fc651ced-d1cf-4a4a-a5e7-7e01b4ee0484] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004177669s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-2gww6" [dcc712f8-8d7f-437e-9900-825404b387f7] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004243378s
addons_test.go:392: (dbg) Run:  kubectl --context addons-509184 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-509184 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-509184 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.57164556s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-509184 ip
2025/11/29 08:32:47 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-509184 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.43s)

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.7s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.50124ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-509184
addons_test.go:332: (dbg) Run:  kubectl --context addons-509184 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-509184 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.70s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (21.11s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-509184 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-509184 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-509184 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [d6ca748c-c976-4bff-98a6-44f3fc900115] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [d6ca748c-c976-4bff-98a6-44f3fc900115] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003808876s
I1129 08:33:00.075813  259483 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-509184 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-509184 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-509184 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-509184 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-509184 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-509184 addons disable ingress --alsologtostderr -v=1: (7.884539658s)
--- PASS: TestAddons/parallel/Ingress (21.11s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.74s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-lj7lq" [ccdf090d-4f4d-406b-b18a-d15b85818393] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004520584s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-509184 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-509184 addons disable inspektor-gadget --alsologtostderr -v=1: (5.730277845s)
--- PASS: TestAddons/parallel/InspektorGadget (10.74s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.73s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 5.739924ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-c2wft" [556addc9-95c2-41a1-9f09-8b940f952ba7] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003607959s
addons_test.go:463: (dbg) Run:  kubectl --context addons-509184 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-509184 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.73s)

                                                
                                    
x
+
TestAddons/parallel/CSI (52.81s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1129 08:32:38.786466  259483 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1129 08:32:38.791569  259483 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1129 08:32:38.791607  259483 kapi.go:107] duration metric: took 5.190675ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 5.203456ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-509184 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-509184 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [b1fd2df4-694d-4fe7-994a-7af406c50490] Pending
helpers_test.go:352: "task-pv-pod" [b1fd2df4-694d-4fe7-994a-7af406c50490] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [b1fd2df4-694d-4fe7-994a-7af406c50490] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.00400927s
addons_test.go:572: (dbg) Run:  kubectl --context addons-509184 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-509184 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-509184 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-509184 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-509184 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-509184 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-509184 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [d272e95d-0531-4bf0-97f2-e26004a735fe] Pending
helpers_test.go:352: "task-pv-pod-restore" [d272e95d-0531-4bf0-97f2-e26004a735fe] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [d272e95d-0531-4bf0-97f2-e26004a735fe] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004089658s
addons_test.go:614: (dbg) Run:  kubectl --context addons-509184 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-509184 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-509184 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-509184 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-509184 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-509184 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.516312859s)
--- PASS: TestAddons/parallel/CSI (52.81s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.57s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-509184 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-slgkw" [043dd26e-c721-43db-aa73-61aad7e48125] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-slgkw" [043dd26e-c721-43db-aa73-61aad7e48125] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.00430189s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-509184 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-509184 addons disable headlamp --alsologtostderr -v=1: (5.771884241s)
--- PASS: TestAddons/parallel/Headlamp (17.57s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.61s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-77nc2" [0158bbb4-ec88-42ee-8a88-b2b3fff332dc] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004064519s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-509184 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.61s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (55.73s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-509184 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-509184 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-509184 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [ff1b0502-369c-4a16-835e-c672b61aa1ed] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [ff1b0502-369c-4a16-835e-c672b61aa1ed] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [ff1b0502-369c-4a16-835e-c672b61aa1ed] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003739591s
addons_test.go:967: (dbg) Run:  kubectl --context addons-509184 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-509184 ssh "cat /opt/local-path-provisioner/pvc-37e3042d-c13b-4548-9236-9378f150bdf4_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-509184 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-509184 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-509184 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-509184 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.754406257s)
--- PASS: TestAddons/parallel/LocalPath (55.73s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.62s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-vnhmp" [7b906128-34a1-4d85-953e-0faf46ca85ab] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003349911s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-509184 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.62s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (10.76s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-z6zj2" [d20b5e8b-421b-473d-8e65-d38552de5f23] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004030205s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-509184 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-509184 addons disable yakd --alsologtostderr -v=1: (5.754378885s)
--- PASS: TestAddons/parallel/Yakd (10.76s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (5.63s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-6jxff" [f5d8be04-56a1-4067-99d3-72a5513c2c11] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003852473s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-509184 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.63s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.6s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-509184
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-509184: (12.308637067s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-509184
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-509184
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-509184
--- PASS: TestAddons/StoppedEnableDisable (12.60s)

                                                
                                    
x
+
TestCertOptions (23.56s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-536258 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-536258 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (20.405089203s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-536258 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-536258 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-536258 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-536258" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-536258
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-536258: (2.446353187s)
--- PASS: TestCertOptions (23.56s)

                                                
                                    
x
+
TestCertExpiration (214.3s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-368536 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-368536 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (26.018907214s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-368536 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-368536 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (5.755484902s)
helpers_test.go:175: Cleaning up "cert-expiration-368536" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-368536
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-368536: (2.522057783s)
--- PASS: TestCertExpiration (214.30s)

                                                
                                    
x
+
TestForceSystemdFlag (42.44s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-773228 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-773228 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (37.98073135s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-773228 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-773228" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-773228
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-773228: (4.06277042s)
--- PASS: TestForceSystemdFlag (42.44s)

                                                
                                    
x
+
TestForceSystemdEnv (27.55s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-693869 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-693869 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (24.776951681s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-693869 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-693869" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-693869
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-693869: (2.458078516s)
--- PASS: TestForceSystemdEnv (27.55s)

                                                
                                    
x
+
TestDockerEnvContainerd (39.94s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-840482 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-840482 --driver=docker  --container-runtime=containerd: (23.983176717s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-840482"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXYc7Ywj/agent.282994" SSH_AGENT_PID="282995" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXYc7Ywj/agent.282994" SSH_AGENT_PID="282995" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXYc7Ywj/agent.282994" SSH_AGENT_PID="282995" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (2.055574901s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXYc7Ywj/agent.282994" SSH_AGENT_PID="282995" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-840482" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-840482
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-840482: (1.963705112s)
--- PASS: TestDockerEnvContainerd (39.94s)
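The DockerEnvContainerd run above exercises the SSH-based docker-env flow: export DOCKER_HOST and SSH-agent settings from the profile so the host docker CLI talks to the cluster node's runtime, then verify with a build and an image listing. A minimal sketch of that flow, using an illustrative profile name and build context:

	# Start a containerd-backed cluster (profile name is illustrative).
	minikube start -p demo --driver=docker --container-runtime=containerd
	# Point the local docker CLI at the node over SSH (same flags as the test).
	eval "$(minikube -p demo docker-env --ssh-host --ssh-add)"
	# Build with the classic builder, as the test does, then confirm the image is on the node.
	DOCKER_BUILDKIT=0 docker build -t local/demo-test:latest ./some-build-context
	docker image ls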

                                                
                                    
x
+
TestErrorSpam/setup (19.66s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-881723 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-881723 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-881723 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-881723 --driver=docker  --container-runtime=containerd: (19.655923636s)
--- PASS: TestErrorSpam/setup (19.66s)

                                                
                                    
x
+
TestErrorSpam/start (0.68s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-881723 --log_dir /tmp/nospam-881723 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-881723 --log_dir /tmp/nospam-881723 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-881723 --log_dir /tmp/nospam-881723 start --dry-run
--- PASS: TestErrorSpam/start (0.68s)

                                                
                                    
x
+
TestErrorSpam/status (0.97s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-881723 --log_dir /tmp/nospam-881723 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-881723 --log_dir /tmp/nospam-881723 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-881723 --log_dir /tmp/nospam-881723 status
--- PASS: TestErrorSpam/status (0.97s)

                                                
                                    
x
+
TestErrorSpam/pause (1.46s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-881723 --log_dir /tmp/nospam-881723 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-881723 --log_dir /tmp/nospam-881723 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-881723 --log_dir /tmp/nospam-881723 pause
--- PASS: TestErrorSpam/pause (1.46s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.55s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-881723 --log_dir /tmp/nospam-881723 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-881723 --log_dir /tmp/nospam-881723 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-881723 --log_dir /tmp/nospam-881723 unpause
--- PASS: TestErrorSpam/unpause (1.55s)

                                                
                                    
x
+
TestErrorSpam/stop (2.08s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-881723 --log_dir /tmp/nospam-881723 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-881723 --log_dir /tmp/nospam-881723 stop: (1.872599807s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-881723 --log_dir /tmp/nospam-881723 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-881723 --log_dir /tmp/nospam-881723 stop
--- PASS: TestErrorSpam/stop (2.08s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22000-255825/.minikube/files/etc/test/nested/copy/259483/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (39.57s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-036665 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-036665 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (39.569806624s)
--- PASS: TestFunctional/serial/StartWithProxy (39.57s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (5.87s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1129 08:36:00.491714  259483 config.go:182] Loaded profile config "functional-036665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-036665 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-036665 --alsologtostderr -v=8: (5.864842558s)
functional_test.go:678: soft start took 5.865576005s for "functional-036665" cluster.
I1129 08:36:06.356943  259483 config.go:182] Loaded profile config "functional-036665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (5.87s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-036665 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.42s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.42s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.01s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-036665 /tmp/TestFunctionalserialCacheCmdcacheadd_local1155313940/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 cache add minikube-local-cache-test:functional-036665
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-036665 cache add minikube-local-cache-test:functional-036665: (1.689612171s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 cache delete minikube-local-cache-test:functional-036665
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-036665
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.01s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-036665 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (294.002596ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)
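The cache_reload sequence above captures the semantics of cache reload: an image deleted from the node's runtime is pushed back from minikube's local cache, so the second crictl inspecti succeeds. A minimal command sketch with an illustrative profile name, assuming the image was previously added with cache add:

	# Remove a cached image from the node and confirm it is gone.
	minikube -p demo ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits non-zero
	# Restore everything in the local cache onto the node, then re-check.
	minikube -p demo cache reload
	minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds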

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 kubectl -- --context functional-036665 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-036665 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (41.62s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-036665 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1129 08:36:33.488849  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/addons-509184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:33.495266  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/addons-509184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:33.506639  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/addons-509184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:33.528010  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/addons-509184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:33.569386  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/addons-509184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:33.650844  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/addons-509184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:33.812381  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/addons-509184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:34.134047  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/addons-509184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:34.775975  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/addons-509184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:36.057980  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/addons-509184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:38.619516  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/addons-509184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:43.741830  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/addons-509184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:53.984160  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/addons-509184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-036665 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.615783404s)
functional_test.go:776: restart took 41.61596234s for "functional-036665" cluster.
I1129 08:36:54.872023  259483 config.go:182] Loaded profile config "functional-036665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (41.62s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-036665 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.2s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-036665 logs: (1.198572445s)
--- PASS: TestFunctional/serial/LogsCmd (1.20s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.23s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 logs --file /tmp/TestFunctionalserialLogsFileCmd83279656/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-036665 logs --file /tmp/TestFunctionalserialLogsFileCmd83279656/001/logs.txt: (1.233813604s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.23s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (3.97s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-036665 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-036665
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-036665: exit status 115 (381.481547ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31363 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-036665 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.97s)
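Exit status 115 here is the SVC_UNREACHABLE case: the Service object from testdata/invalidsvc.yaml gets a NodePort, but no running pod ever backs it, so `minikube service` reports the URL and then fails. A hedged stand-in for the fixture (a NodePort service whose selector matches no pods) would behave the same way; this is an approximation, not the test's actual YAML:
	# hypothetical stand-in for testdata/invalidsvc.yaml: selector app=invalid-svc matches nothing
	kubectl --context functional-036665 create service nodeport invalid-svc --tcp=80:80
	out/minikube-linux-amd64 service invalid-svc -p functional-036665   # expected to exit non-zero (115 observed above)
	kubectl --context functional-036665 delete service invalid-svc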

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-036665 config get cpus: exit status 14 (116.157341ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-036665 config get cpus: exit status 14 (120.287925ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.56s)
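The pattern being exercised is that `config get` exits with status 14 whenever the key is absent from the profile config, and succeeds once the key has been set. Condensed, the sequence above is:
	out/minikube-linux-amd64 -p functional-036665 config unset cpus
	out/minikube-linux-amd64 -p functional-036665 config get cpus     # exit 14: key not in config
	out/minikube-linux-amd64 -p functional-036665 config set cpus 2
	out/minikube-linux-amd64 -p functional-036665 config get cpus     # exit 0 now that the key exists
	out/minikube-linux-amd64 -p functional-036665 config unset cpus
	out/minikube-linux-amd64 -p functional-036665 config get cpus     # exit 14 again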

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (8.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-036665 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-036665 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 305755: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.56s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-036665 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-036665 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (211.301114ms)

                                                
                                                
-- stdout --
	* [functional-036665] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-255825/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-255825/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 08:37:29.958798  304534 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:37:29.959146  304534 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:37:29.959196  304534 out.go:374] Setting ErrFile to fd 2...
	I1129 08:37:29.959215  304534 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:37:29.959581  304534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-255825/.minikube/bin
	I1129 08:37:29.960298  304534 out.go:368] Setting JSON to false
	I1129 08:37:29.961858  304534 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4794,"bootTime":1764400656,"procs":265,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 08:37:29.961969  304534 start.go:143] virtualization: kvm guest
	I1129 08:37:29.964154  304534 out.go:179] * [functional-036665] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 08:37:29.966169  304534 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 08:37:29.966193  304534 notify.go:221] Checking for updates...
	I1129 08:37:29.968089  304534 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 08:37:29.972853  304534 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-255825/kubeconfig
	I1129 08:37:29.975385  304534 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-255825/.minikube
	I1129 08:37:29.976902  304534 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 08:37:29.978012  304534 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 08:37:29.979394  304534 config.go:182] Loaded profile config "functional-036665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 08:37:29.980272  304534 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 08:37:30.009993  304534 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 08:37:30.010157  304534 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:37:30.081322  304534 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-11-29 08:37:30.071276806 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 08:37:30.081464  304534 docker.go:319] overlay module found
	I1129 08:37:30.083249  304534 out.go:179] * Using the docker driver based on existing profile
	I1129 08:37:30.084233  304534 start.go:309] selected driver: docker
	I1129 08:37:30.084255  304534 start.go:927] validating driver "docker" against &{Name:functional-036665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-036665 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 08:37:30.084372  304534 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 08:37:30.085850  304534 out.go:203] 
	W1129 08:37:30.086855  304534 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1129 08:37:30.087926  304534 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-036665 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.46s)
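The two dry-run invocations differ only in whether the requested memory passes validation: 250MB is below the 1800MB usable minimum, so the first run exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before touching Docker, while the second omits --memory and falls back to the existing profile's 4096MB. As a sketch:
	# fails validation: requested memory is below the usable minimum
	out/minikube-linux-amd64 start -p functional-036665 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
	echo $?   # 23
	# passes: no memory override, so the existing profile's settings are used
	out/minikube-linux-amd64 start -p functional-036665 --dry-run --driver=docker --container-runtime=containerd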

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-036665 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-036665 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (205.298429ms)

                                                
                                                
-- stdout --
	* [functional-036665] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-255825/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-255825/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 08:37:30.417021  304964 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:37:30.417237  304964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:37:30.417248  304964 out.go:374] Setting ErrFile to fd 2...
	I1129 08:37:30.417254  304964 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:37:30.417692  304964 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-255825/.minikube/bin
	I1129 08:37:30.418312  304964 out.go:368] Setting JSON to false
	I1129 08:37:30.419609  304964 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4794,"bootTime":1764400656,"procs":271,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 08:37:30.419671  304964 start.go:143] virtualization: kvm guest
	I1129 08:37:30.424564  304964 out.go:179] * [functional-036665] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1129 08:37:30.426123  304964 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 08:37:30.426105  304964 notify.go:221] Checking for updates...
	I1129 08:37:30.427320  304964 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 08:37:30.428821  304964 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-255825/kubeconfig
	I1129 08:37:30.430043  304964 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-255825/.minikube
	I1129 08:37:30.431420  304964 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 08:37:30.432681  304964 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 08:37:30.434893  304964 config.go:182] Loaded profile config "functional-036665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 08:37:30.435763  304964 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 08:37:30.466674  304964 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 08:37:30.466818  304964 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:37:30.533445  304964 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-11-29 08:37:30.522529931 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 08:37:30.533580  304964 docker.go:319] overlay module found
	I1129 08:37:30.535301  304964 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1129 08:37:30.536600  304964 start.go:309] selected driver: docker
	I1129 08:37:30.536620  304964 start.go:927] validating driver "docker" against &{Name:functional-036665 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-036665 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 08:37:30.536783  304964 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 08:37:30.538697  304964 out.go:203] 
	W1129 08:37:30.540077  304964 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1129 08:37:30.541826  304964 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 status -o json
E1129 08:37:14.465989  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/addons-509184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/StatusCmd (1.14s)
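The three status calls cover the default human-readable view, a Go-template format string, and JSON. Note that the `kublet:` in the test's format string is only a literal label; the field lookup itself is `{{.Kubelet}}`, so the typo has no effect. Manually (label spelled out correctly here):
	out/minikube-linux-amd64 -p functional-036665 status
	out/minikube-linux-amd64 -p functional-036665 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	out/minikube-linux-amd64 -p functional-036665 status -o json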

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-036665 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-036665 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-hknkp" [4b5ee801-b8f7-4236-86e3-49c403ff5ccf] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-hknkp" [4b5ee801-b8f7-4236-86e3-49c403ff5ccf] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003631333s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31449
functional_test.go:1680: http://192.168.49.2:31449: success! body:
Request served by hello-node-connect-7d85dfc575-hknkp

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31449
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.57s)
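End to end, this is the standard deploy/expose/probe loop: create a deployment from the echo-server image, expose it as a NodePort service, ask minikube for the node URL, and request it. A condensed manual version; the final curl is an assumption, since the test uses its own HTTP client:
	kubectl --context functional-036665 create deployment hello-node-connect --image kicbase/echo-server
	kubectl --context functional-036665 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-amd64 -p functional-036665 service hello-node-connect --url)
	curl -s "$URL"   # assumption: echo-server answers with the request it received, as in the body above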

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (33.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [687c0de3-fac1-4044-8c05-3b22fc08cb3a] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003912184s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-036665 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-036665 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-036665 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-036665 apply -f testdata/storage-provisioner/pod.yaml
I1129 08:37:07.328293  259483 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [0cf39b1e-eeb4-4d49-aff0-0252d649a12f] Pending
helpers_test.go:352: "sp-pod" [0cf39b1e-eeb4-4d49-aff0-0252d649a12f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [0cf39b1e-eeb4-4d49-aff0-0252d649a12f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.004065153s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-036665 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-036665 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-036665 delete -f testdata/storage-provisioner/pod.yaml: (1.003632645s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-036665 apply -f testdata/storage-provisioner/pod.yaml
I1129 08:37:28.597622  259483 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [e0e574f9-4bde-4a89-a985-6948b2e58d2c] Pending
helpers_test.go:352: "sp-pod" [e0e574f9-4bde-4a89-a985-6948b2e58d2c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [e0e574f9-4bde-4a89-a985-6948b2e58d2c] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.006564685s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-036665 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (33.82s)
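The claim here is that data written through the PVC survives pod deletion: a file is touched from the first sp-pod, that pod is deleted, a second pod mounting the same claim is created, and the file is listed again. Stripped of the readiness polling, the flow is:
	kubectl --context functional-036665 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-036665 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-036665 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-036665 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-036665 apply -f testdata/storage-provisioner/pod.yaml   # new pod, same claim
	kubectl --context functional-036665 exec sp-pod -- ls /tmp/mount                     # foo should still be present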

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.71s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh -n functional-036665 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 cp functional-036665:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1807180743/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh -n functional-036665 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh -n functional-036665 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.81s)
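`minikube cp` is exercised in both directions, plus a copy into a node path that does not exist yet, and each copy is verified with an ssh cat. For example (the host-side destination path below is arbitrary, not the test's temp dir):
	# host -> node
	out/minikube-linux-amd64 -p functional-036665 cp testdata/cp-test.txt /home/docker/cp-test.txt
	# node -> host (destination chosen for illustration)
	out/minikube-linux-amd64 -p functional-036665 cp functional-036665:/home/docker/cp-test.txt /tmp/cp-test.txt
	# host -> node path whose parent directories do not exist yet
	out/minikube-linux-amd64 -p functional-036665 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
	out/minikube-linux-amd64 -p functional-036665 ssh -n functional-036665 "sudo cat /tmp/does/not/exist/cp-test.txt"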

                                                
                                    
x
+
TestFunctional/parallel/MySQL (23.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-036665 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-qb7t2" [fcc83c53-42b2-4cc2-8b3d-a1cb52d1891f] Pending
helpers_test.go:352: "mysql-5bb876957f-qb7t2" [fcc83c53-42b2-4cc2-8b3d-a1cb52d1891f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-qb7t2" [fcc83c53-42b2-4cc2-8b3d-a1cb52d1891f] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.003680042s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-036665 exec mysql-5bb876957f-qb7t2 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-036665 exec mysql-5bb876957f-qb7t2 -- mysql -ppassword -e "show databases;": exit status 1 (123.360969ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1129 08:37:23.540101  259483 retry.go:31] will retry after 834.961026ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-036665 exec mysql-5bb876957f-qb7t2 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-036665 exec mysql-5bb876957f-qb7t2 -- mysql -ppassword -e "show databases;": exit status 1 (119.25407ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1129 08:37:24.495114  259483 retry.go:31] will retry after 1.601939304s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-036665 exec mysql-5bb876957f-qb7t2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.99s)
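The two ERROR 2002 failures are expected noise: the mysql container reports Running before mysqld has finished creating its socket, so the harness retries with a short backoff until `show databases;` succeeds. A minimal retry loop along the same lines (the sleep interval is arbitrary; the test's backoff is chosen by retry.go):
	until kubectl --context functional-036665 exec mysql-5bb876957f-qb7t2 -- mysql -ppassword -e "show databases;"; do
	  sleep 2   # mysqld may still be initializing even though the pod is Running
	done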

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/259483/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh "sudo cat /etc/test/nested/copy/259483/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/259483.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh "sudo cat /etc/ssl/certs/259483.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/259483.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh "sudo cat /usr/share/ca-certificates/259483.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2594832.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh "sudo cat /etc/ssl/certs/2594832.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2594832.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh "sudo cat /usr/share/ca-certificates/2594832.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.85s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-036665 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-036665 ssh "sudo systemctl is-active docker": exit status 1 (331.100756ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-036665 ssh "sudo systemctl is-active crio": exit status 1 (324.466899ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)
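Since this profile runs containerd, the docker and crio units must not be active. `systemctl is-active` prints `inactive` and exits with status 3 for an inactive unit, which the ssh wrapper surfaces as the "Process exited with status 3" stderr plus a non-zero exit, exactly as captured above:
	out/minikube-linux-amd64 -p functional-036665 ssh "sudo systemctl is-active docker"   # prints "inactive", exits non-zero
	out/minikube-linux-amd64 -p functional-036665 ssh "sudo systemctl is-active crio"     # likewise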

                                                
                                    
x
+
TestFunctional/parallel/License (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-036665 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-036665 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-036665 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-036665 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 298847: os: process already finished
helpers_test.go:519: unable to terminate pid 298477: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-036665 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-036665 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [b8ac8b42-4010-4959-bbbb-3f184a9e9a28] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [b8ac8b42-4010-4959-bbbb-3f184a9e9a28] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.00418432s
I1129 08:37:13.397051  259483 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.23s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-036665 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-036665
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-036665
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-036665 image ls --format short --alsologtostderr:
I1129 08:37:34.232100  307518 out.go:360] Setting OutFile to fd 1 ...
I1129 08:37:34.232428  307518 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:37:34.232436  307518 out.go:374] Setting ErrFile to fd 2...
I1129 08:37:34.232442  307518 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:37:34.232744  307518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-255825/.minikube/bin
I1129 08:37:34.233541  307518 config.go:182] Loaded profile config "functional-036665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1129 08:37:34.233694  307518 config.go:182] Loaded profile config "functional-036665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1129 08:37:34.234247  307518 cli_runner.go:164] Run: docker container inspect functional-036665 --format={{.State.Status}}
I1129 08:37:34.264842  307518 ssh_runner.go:195] Run: systemctl --version
I1129 08:37:34.264911  307518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-036665
I1129 08:37:34.288518  307518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/functional-036665/id_rsa Username:docker}
I1129 08:37:34.393497  307518 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
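The image listing goes through crictl on the node (`sudo crictl images --output json`, visible in the stderr trace above) and is then rendered client-side in whichever format was requested; the short, table, and json variants in this group all read the same data:
	out/minikube-linux-amd64 -p functional-036665 image ls --format short
	out/minikube-linux-amd64 -p functional-036665 image ls --format table
	out/minikube-linux-amd64 -p functional-036665 image ls --format json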

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-036665 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server               │ functional-036665  │ sha256:9056ab │ 2.37MB │
│ docker.io/kicbase/echo-server               │ latest             │ sha256:9056ab │ 2.37MB │
│ docker.io/library/minikube-local-cache-test │ functional-036665  │ sha256:537777 │ 991B   │
│ docker.io/library/nginx                     │ latest             │ sha256:60adc2 │ 59.8MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:7dd6aa │ 17.4MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:c80c8d │ 22.8MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:fc2517 │ 26MB   │
│ docker.io/library/mysql                     │ 5.7                │ sha256:510733 │ 138MB  │
│ docker.io/library/nginx                     │ alpine             │ sha256:d4918c │ 22.6MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:5f1f52 │ 74.3MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:c3994b │ 27.1MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-036665 image ls --format table --alsologtostderr:
I1129 08:37:34.779429  307872 out.go:360] Setting OutFile to fd 1 ...
I1129 08:37:34.779537  307872 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:37:34.779545  307872 out.go:374] Setting ErrFile to fd 2...
I1129 08:37:34.779552  307872 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:37:34.779946  307872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-255825/.minikube/bin
I1129 08:37:34.780763  307872 config.go:182] Loaded profile config "functional-036665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1129 08:37:34.780939  307872 config.go:182] Loaded profile config "functional-036665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1129 08:37:34.781715  307872 cli_runner.go:164] Run: docker container inspect functional-036665 --format={{.State.Status}}
I1129 08:37:34.805663  307872 ssh_runner.go:195] Run: systemctl --version
I1129 08:37:34.805756  307872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-036665
I1129 08:37:34.829218  307872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/functional-036665/id_rsa Username:docker}
I1129 08:37:34.939761  307872 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-036665 image ls --format json --alsologtostderr:
[{"id":"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"74311308"},{"id":"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"22820214"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:537777c70af92acaea8d93e
fa85b968331ae7819a12ab496813dd762087a4246","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-036665"],"size":"991"},{"id":"sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"22631814"},{"id":"sha256:60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42"],"repoTags":["docker.io/library/nginx:latest"],"size":"59772801"},{"id":"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"25963718"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDige
sts":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed9
73dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"27061991"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"17385568"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6"],"repoTags":["docker.io/kicbase/echo-server:functional-036665","docker.io/kicbase/echo-server:latest"],"size":"2372971"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ec
e7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-036665 image ls --format json --alsologtostderr:
I1129 08:37:34.508155  307713 out.go:360] Setting OutFile to fd 1 ...
I1129 08:37:34.508394  307713 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:37:34.508402  307713 out.go:374] Setting ErrFile to fd 2...
I1129 08:37:34.508406  307713 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:37:34.508621  307713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-255825/.minikube/bin
I1129 08:37:34.509153  307713 config.go:182] Loaded profile config "functional-036665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1129 08:37:34.509252  307713 config.go:182] Loaded profile config "functional-036665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1129 08:37:34.509818  307713 cli_runner.go:164] Run: docker container inspect functional-036665 --format={{.State.Status}}
I1129 08:37:34.529224  307713 ssh_runner.go:195] Run: systemctl --version
I1129 08:37:34.529290  307713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-036665
I1129 08:37:34.550195  307713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/functional-036665/id_rsa Username:docker}
I1129 08:37:34.662476  307713 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
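The JSON printed above is an array of objects with id, repoDigests, repoTags and size fields. A minimal host-side sketch for slicing it, assuming jq is installed on the host (jq is not part of the test run):

# List every repo tag the container runtime knows about (same profile as above).
out/minikube-linux-amd64 -p functional-036665 image ls --format json \
  | jq -r '.[].repoTags[]'
# Print "tag<TAB>size-in-bytes" for images that carry at least one tag.
out/minikube-linux-amd64 -p functional-036665 image ls --format json \
  | jq -r '.[] | select(.repoTags | length > 0) | "\(.repoTags[0])\t\(.size)"'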

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-036665 image ls --format yaml --alsologtostderr:
- id: sha256:537777c70af92acaea8d93efa85b968331ae7819a12ab496813dd762087a4246
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-036665
size: "991"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "22631814"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "22820214"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
repoTags:
- docker.io/kicbase/echo-server:functional-036665
- docker.io/kicbase/echo-server:latest
size: "2372971"
- id: sha256:60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
repoTags:
- docker.io/library/nginx:latest
size: "59772801"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "74311308"
- id: sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "27061991"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "25963718"
- id: sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "17385568"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-036665 image ls --format yaml --alsologtostderr:
I1129 08:37:34.242940  307524 out.go:360] Setting OutFile to fd 1 ...
I1129 08:37:34.243226  307524 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:37:34.243251  307524 out.go:374] Setting ErrFile to fd 2...
I1129 08:37:34.243258  307524 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:37:34.243580  307524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-255825/.minikube/bin
I1129 08:37:34.244365  307524 config.go:182] Loaded profile config "functional-036665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1129 08:37:34.244504  307524 config.go:182] Loaded profile config "functional-036665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1129 08:37:34.245194  307524 cli_runner.go:164] Run: docker container inspect functional-036665 --format={{.State.Status}}
I1129 08:37:34.269276  307524 ssh_runner.go:195] Run: systemctl --version
I1129 08:37:34.269358  307524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-036665
I1129 08:37:34.291399  307524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/functional-036665/id_rsa Username:docker}
I1129 08:37:34.395117  307524 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-036665 ssh pgrep buildkitd: exit status 1 (311.13254ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 image build -t localhost/my-image:functional-036665 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-036665 image build -t localhost/my-image:functional-036665 testdata/build --alsologtostderr: (3.949864771s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-036665 image build -t localhost/my-image:functional-036665 testdata/build --alsologtostderr:
I1129 08:37:34.828058  307883 out.go:360] Setting OutFile to fd 1 ...
I1129 08:37:34.828399  307883 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:37:34.828411  307883 out.go:374] Setting ErrFile to fd 2...
I1129 08:37:34.828414  307883 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:37:34.828644  307883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-255825/.minikube/bin
I1129 08:37:34.829545  307883 config.go:182] Loaded profile config "functional-036665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1129 08:37:34.830469  307883 config.go:182] Loaded profile config "functional-036665": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1129 08:37:34.831125  307883 cli_runner.go:164] Run: docker container inspect functional-036665 --format={{.State.Status}}
I1129 08:37:34.853402  307883 ssh_runner.go:195] Run: systemctl --version
I1129 08:37:34.853490  307883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-036665
I1129 08:37:34.875382  307883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/functional-036665/id_rsa Username:docker}
I1129 08:37:34.985894  307883 build_images.go:162] Building image from path: /tmp/build.3231631403.tar
I1129 08:37:34.985982  307883 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1129 08:37:34.995844  307883 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3231631403.tar
I1129 08:37:35.000099  307883 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3231631403.tar: stat -c "%s %y" /var/lib/minikube/build/build.3231631403.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3231631403.tar': No such file or directory
I1129 08:37:35.000133  307883 ssh_runner.go:362] scp /tmp/build.3231631403.tar --> /var/lib/minikube/build/build.3231631403.tar (3072 bytes)
I1129 08:37:35.021590  307883 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3231631403
I1129 08:37:35.031599  307883 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3231631403 -xf /var/lib/minikube/build/build.3231631403.tar
I1129 08:37:35.040212  307883 containerd.go:394] Building image: /var/lib/minikube/build/build.3231631403
I1129 08:37:35.040309  307883 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3231631403 --local dockerfile=/var/lib/minikube/build/build.3231631403 --output type=image,name=localhost/my-image:functional-036665
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.9s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.6s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.7s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.7s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:31196d6293fbe494dce3ca0f787f2c24c3cc1368c70d5b55e7199f4e0e6e18ed done
#8 exporting config sha256:6390ba3abb505fa4a3f6ce612cbf8d3521156f3789beff0ed914366b0d7e61e8 done
#8 naming to localhost/my-image:functional-036665 done
#8 DONE 0.1s
I1129 08:37:38.676911  307883 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3231631403 --local dockerfile=/var/lib/minikube/build/build.3231631403 --output type=image,name=localhost/my-image:functional-036665: (3.636568547s)
I1129 08:37:38.676999  307883 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3231631403
I1129 08:37:38.686474  307883 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3231631403.tar
I1129 08:37:38.694591  307883 build_images.go:218] Built localhost/my-image:functional-036665 from /tmp/build.3231631403.tar
I1129 08:37:38.694623  307883 build_images.go:134] succeeded building to: functional-036665
I1129 08:37:38.694628  307883 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 image ls
2025/11/29 08:37:38 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.51s)
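For reference, the buildkit steps logged above (#5 FROM gcr.io/k8s-minikube/busybox, #6 RUN true, #7 ADD content.txt) imply a build context along the following lines. This is a hedged reconstruction; the real testdata/build directory in the minikube tree may differ in detail, and the scratch path is arbitrary.

# Recreate an equivalent build context and run the same image build command the test uses.
mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF
echo "hello from the build test" > content.txt
out/minikube-linux-amd64 -p functional-036665 image build -t localhost/my-image:functional-036665 .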

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.934168739s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-036665
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.96s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 image load --daemon kicbase/echo-server:functional-036665 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.23s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-036665 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.233.173 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-036665 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
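The TunnelCmd serial tests above (WaitService/IngressIP, AccessDirect, DeleteTunnel) amount to the flow sketched below; the curl check is an illustrative stand-in for the HTTP probe the test performs.

# Run the tunnel in the background, read the LoadBalancer IP assigned to nginx-svc,
# hit it once, then stop the tunnel again.
out/minikube-linux-amd64 -p functional-036665 tunnel --alsologtostderr &
TUNNEL_PID=$!
IP=$(kubectl --context functional-036665 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s "http://$IP" >/dev/null && echo "tunnel at http://$IP is working"
kill "$TUNNEL_PID"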

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 image load --daemon kicbase/echo-server:functional-036665 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.18s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (15.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-036665 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-036665 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-4gbbd" [8411f537-562e-4d97-8995-abc6341c2458] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-4gbbd" [8411f537-562e-4d97-8995-abc6341c2458] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 15.00414322s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (15.17s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-036665
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 image load --daemon kicbase/echo-server:functional-036665 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.06s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 image save kicbase/echo-server:functional-036665 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 image rm kicbase/echo-server:functional-036665 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-036665
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 image save --daemon kicbase/echo-server:functional-036665 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-036665
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)
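The four ImageCommands tests above (ImageSaveToFile, ImageRemove, ImageLoadFromFile, ImageSaveDaemon) amount to a save/remove/load round trip. A condensed sketch using the same commands; the tar path here is arbitrary.

# Save the tagged image to a tarball, remove it from the cluster, load it back, verify.
out/minikube-linux-amd64 -p functional-036665 image save kicbase/echo-server:functional-036665 /tmp/echo-server-save.tar
out/minikube-linux-amd64 -p functional-036665 image rm kicbase/echo-server:functional-036665
out/minikube-linux-amd64 -p functional-036665 image load /tmp/echo-server-save.tar
out/minikube-linux-amd64 -p functional-036665 image ls
# Push the image back into the host docker daemon, as ImageSaveDaemon does, and confirm.
out/minikube-linux-amd64 -p functional-036665 image save --daemon kicbase/echo-server:functional-036665
docker image inspect kicbase/echo-server:functional-036665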

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "417.311662ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "179.497945ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "388.104886ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "83.504994ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (9.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-036665 /tmp/TestFunctionalparallelMountCmdany-port1881018448/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764405441076087200" to /tmp/TestFunctionalparallelMountCmdany-port1881018448/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764405441076087200" to /tmp/TestFunctionalparallelMountCmdany-port1881018448/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764405441076087200" to /tmp/TestFunctionalparallelMountCmdany-port1881018448/001/test-1764405441076087200
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-036665 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (348.138662ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1129 08:37:21.424575  259483 retry.go:31] will retry after 594.942665ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 29 08:37 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 29 08:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 29 08:37 test-1764405441076087200
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh cat /mount-9p/test-1764405441076087200
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-036665 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [3d8a2e93-0d92-45b0-8d14-588c074a1f01] Pending
helpers_test.go:352: "busybox-mount" [3d8a2e93-0d92-45b0-8d14-588c074a1f01] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [3d8a2e93-0d92-45b0-8d14-588c074a1f01] Running
helpers_test.go:352: "busybox-mount" [3d8a2e93-0d92-45b0-8d14-588c074a1f01] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [3d8a2e93-0d92-45b0-8d14-588c074a1f01] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004569622s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-036665 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-036665 /tmp/TestFunctionalparallelMountCmdany-port1881018448/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.20s)
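The any-port test above drives minikube's 9p mount end to end. The same workflow can be reproduced by hand roughly as follows; the host directory and file name are arbitrary examples.

# Start the mount in the background, confirm the 9p filesystem is visible in the guest,
# read the file back, then tear the mount down.
mkdir -p /tmp/mount-demo && echo "created-by-hand" > /tmp/mount-demo/hello.txt
out/minikube-linux-amd64 mount -p functional-036665 /tmp/mount-demo:/mount-9p &
MOUNT_PID=$!
out/minikube-linux-amd64 -p functional-036665 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-036665 ssh "cat /mount-9p/hello.txt"
kill "$MOUNT_PID"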

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.96s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-036665 /tmp/TestFunctionalparallelMountCmdspecific-port1519856729/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-036665 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (341.014466ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1129 08:37:30.618377  259483 retry.go:31] will retry after 286.189525ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-036665 /tmp/TestFunctionalparallelMountCmdspecific-port1519856729/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-036665 ssh "sudo umount -f /mount-9p": exit status 1 (321.219682ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-036665 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-036665 /tmp/TestFunctionalparallelMountCmdspecific-port1519856729/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.78s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 service list -o json
functional_test.go:1504: Took "940.092815ms" to run "out/minikube-linux-amd64 -p functional-036665 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.94s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32444
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-036665 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2090589759/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-036665 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2090589759/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-036665 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2090589759/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-036665 ssh "findmnt -T" /mount1: exit status 1 (375.132423ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1129 08:37:32.433566  259483 retry.go:31] will retry after 273.940714ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-036665 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-036665 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2090589759/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-036665 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2090589759/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-036665 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2090589759/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.60s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-036665 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32444
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.57s)
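Taken together, the ServiceCmd tests above cover the deploy, expose, list, and URL-lookup flow. A condensed sketch of the same sequence; the rollout-status wait is an added convenience, standing in for the pod-readiness polling the test does.

# Deploy echo-server, expose it as a NodePort on 8080, then ask minikube for the reachable URL.
kubectl --context functional-036665 create deployment hello-node --image kicbase/echo-server
kubectl --context functional-036665 expose deployment hello-node --type=NodePort --port=8080
kubectl --context functional-036665 rollout status deployment/hello-node
out/minikube-linux-amd64 -p functional-036665 service list
out/minikube-linux-amd64 -p functional-036665 service hello-node --url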

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-036665
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-036665
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-036665
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (127s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1129 08:37:55.428804  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/addons-509184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:39:17.351128  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/addons-509184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-721104 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m6.247860766s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (127.00s)
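The start invocation above is easier to read broken across lines; the flags are exactly those used by the test.

# --ha requests a multi-control-plane (highly available) cluster; --wait true blocks
# until cluster components report healthy before the command returns.
out/minikube-linux-amd64 -p ha-721104 start \
  --ha \
  --memory 3072 \
  --wait true \
  --driver=docker \
  --container-runtime=containerd \
  --alsologtostderr -v 5
# Confirm every node (control planes and worker) reports Running/Ready.
out/minikube-linux-amd64 -p ha-721104 status --alsologtostderr -v 5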

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-721104 kubectl -- rollout status deployment/busybox: (3.74862212s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 kubectl -- exec busybox-7b57f96db7-gl69g -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 kubectl -- exec busybox-7b57f96db7-hmcsg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 kubectl -- exec busybox-7b57f96db7-tbdh7 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 kubectl -- exec busybox-7b57f96db7-gl69g -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 kubectl -- exec busybox-7b57f96db7-hmcsg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 kubectl -- exec busybox-7b57f96db7-tbdh7 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 kubectl -- exec busybox-7b57f96db7-gl69g -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 kubectl -- exec busybox-7b57f96db7-hmcsg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 kubectl -- exec busybox-7b57f96db7-tbdh7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.92s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 kubectl -- exec busybox-7b57f96db7-gl69g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 kubectl -- exec busybox-7b57f96db7-gl69g -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 kubectl -- exec busybox-7b57f96db7-hmcsg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 kubectl -- exec busybox-7b57f96db7-hmcsg -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 kubectl -- exec busybox-7b57f96db7-tbdh7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 kubectl -- exec busybox-7b57f96db7-tbdh7 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.21s)
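The PingHostFromPods checks above run the same two probes against each busybox pod; against a single pod the sequence is roughly as follows (the items[0] jsonpath is an illustrative simplification of the test's pod listing).

# Resolve host.minikube.internal from inside the pod, then ping the host gateway it maps to.
POD=$(out/minikube-linux-amd64 -p ha-721104 kubectl -- get pods -o jsonpath='{.items[0].metadata.name}')
out/minikube-linux-amd64 -p ha-721104 kubectl -- exec "$POD" -- \
  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
out/minikube-linux-amd64 -p ha-721104 kubectl -- exec "$POD" -- sh -c "ping -c 1 192.168.49.1"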

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (27.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-721104 node add --alsologtostderr -v 5: (26.367964652s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (27.28s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-721104 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (17.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 cp testdata/cp-test.txt ha-721104:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 cp ha-721104:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2093689227/001/cp-test_ha-721104.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 cp ha-721104:/home/docker/cp-test.txt ha-721104-m02:/home/docker/cp-test_ha-721104_ha-721104-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104-m02 "sudo cat /home/docker/cp-test_ha-721104_ha-721104-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 cp ha-721104:/home/docker/cp-test.txt ha-721104-m03:/home/docker/cp-test_ha-721104_ha-721104-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104-m03 "sudo cat /home/docker/cp-test_ha-721104_ha-721104-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 cp ha-721104:/home/docker/cp-test.txt ha-721104-m04:/home/docker/cp-test_ha-721104_ha-721104-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104-m04 "sudo cat /home/docker/cp-test_ha-721104_ha-721104-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 cp testdata/cp-test.txt ha-721104-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 cp ha-721104-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2093689227/001/cp-test_ha-721104-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 cp ha-721104-m02:/home/docker/cp-test.txt ha-721104:/home/docker/cp-test_ha-721104-m02_ha-721104.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104 "sudo cat /home/docker/cp-test_ha-721104-m02_ha-721104.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 cp ha-721104-m02:/home/docker/cp-test.txt ha-721104-m03:/home/docker/cp-test_ha-721104-m02_ha-721104-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104-m03 "sudo cat /home/docker/cp-test_ha-721104-m02_ha-721104-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 cp ha-721104-m02:/home/docker/cp-test.txt ha-721104-m04:/home/docker/cp-test_ha-721104-m02_ha-721104-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104-m04 "sudo cat /home/docker/cp-test_ha-721104-m02_ha-721104-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 cp testdata/cp-test.txt ha-721104-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 cp ha-721104-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2093689227/001/cp-test_ha-721104-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 cp ha-721104-m03:/home/docker/cp-test.txt ha-721104:/home/docker/cp-test_ha-721104-m03_ha-721104.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104 "sudo cat /home/docker/cp-test_ha-721104-m03_ha-721104.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 cp ha-721104-m03:/home/docker/cp-test.txt ha-721104-m02:/home/docker/cp-test_ha-721104-m03_ha-721104-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104-m02 "sudo cat /home/docker/cp-test_ha-721104-m03_ha-721104-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 cp ha-721104-m03:/home/docker/cp-test.txt ha-721104-m04:/home/docker/cp-test_ha-721104-m03_ha-721104-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104-m04 "sudo cat /home/docker/cp-test_ha-721104-m03_ha-721104-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 cp testdata/cp-test.txt ha-721104-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 cp ha-721104-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2093689227/001/cp-test_ha-721104-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 cp ha-721104-m04:/home/docker/cp-test.txt ha-721104:/home/docker/cp-test_ha-721104-m04_ha-721104.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104 "sudo cat /home/docker/cp-test_ha-721104-m04_ha-721104.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 cp ha-721104-m04:/home/docker/cp-test.txt ha-721104-m02:/home/docker/cp-test_ha-721104-m04_ha-721104-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104-m02 "sudo cat /home/docker/cp-test_ha-721104-m04_ha-721104-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 cp ha-721104-m04:/home/docker/cp-test.txt ha-721104-m03:/home/docker/cp-test_ha-721104-m04_ha-721104-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 ssh -n ha-721104-m03 "sudo cat /home/docker/cp-test_ha-721104-m04_ha-721104-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.77s)
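The block above is an all-pairs copy check: testdata/cp-test.txt is copied onto every node, copied from each node to every other node, and each copy is read back over ssh. One round trip of that pattern, condensed from the commands above:

    # host -> node, node -> node, then read the result back over ssh
    minikube -p ha-721104 cp testdata/cp-test.txt ha-721104:/home/docker/cp-test.txt
    minikube -p ha-721104 cp ha-721104:/home/docker/cp-test.txt ha-721104-m02:/home/docker/cp-test_ha-721104_ha-721104-m02.txt
    minikube -p ha-721104 ssh -n ha-721104-m02 "sudo cat /home/docker/cp-test_ha-721104_ha-721104-m02.txt"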

TestMultiControlPlane/serial/StopSecondaryNode (12.71s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-721104 node stop m02 --alsologtostderr -v 5: (12.003584569s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-721104 status --alsologtostderr -v 5: exit status 7 (704.243106ms)
-- stdout --
	ha-721104
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-721104-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-721104-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-721104-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1129 08:40:54.842850  328955 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:40:54.842984  328955 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:40:54.842996  328955 out.go:374] Setting ErrFile to fd 2...
	I1129 08:40:54.843002  328955 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:40:54.843242  328955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-255825/.minikube/bin
	I1129 08:40:54.843443  328955 out.go:368] Setting JSON to false
	I1129 08:40:54.843482  328955 mustload.go:66] Loading cluster: ha-721104
	I1129 08:40:54.843570  328955 notify.go:221] Checking for updates...
	I1129 08:40:54.843898  328955 config.go:182] Loaded profile config "ha-721104": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 08:40:54.843918  328955 status.go:174] checking status of ha-721104 ...
	I1129 08:40:54.844395  328955 cli_runner.go:164] Run: docker container inspect ha-721104 --format={{.State.Status}}
	I1129 08:40:54.863886  328955 status.go:371] ha-721104 host status = "Running" (err=<nil>)
	I1129 08:40:54.863911  328955 host.go:66] Checking if "ha-721104" exists ...
	I1129 08:40:54.864210  328955 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-721104
	I1129 08:40:54.882785  328955 host.go:66] Checking if "ha-721104" exists ...
	I1129 08:40:54.883080  328955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 08:40:54.883135  328955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-721104
	I1129 08:40:54.900865  328955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/ha-721104/id_rsa Username:docker}
	I1129 08:40:55.001210  328955 ssh_runner.go:195] Run: systemctl --version
	I1129 08:40:55.007952  328955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 08:40:55.020555  328955 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:40:55.078970  328955 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-11-29 08:40:55.068920035 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 08:40:55.079522  328955 kubeconfig.go:125] found "ha-721104" server: "https://192.168.49.254:8443"
	I1129 08:40:55.079552  328955 api_server.go:166] Checking apiserver status ...
	I1129 08:40:55.079586  328955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 08:40:55.091910  328955 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1364/cgroup
	W1129 08:40:55.100274  328955 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1364/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1129 08:40:55.100329  328955 ssh_runner.go:195] Run: ls
	I1129 08:40:55.104058  328955 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1129 08:40:55.108143  328955 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1129 08:40:55.108169  328955 status.go:463] ha-721104 apiserver status = Running (err=<nil>)
	I1129 08:40:55.108182  328955 status.go:176] ha-721104 status: &{Name:ha-721104 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 08:40:55.108203  328955 status.go:174] checking status of ha-721104-m02 ...
	I1129 08:40:55.108522  328955 cli_runner.go:164] Run: docker container inspect ha-721104-m02 --format={{.State.Status}}
	I1129 08:40:55.126797  328955 status.go:371] ha-721104-m02 host status = "Stopped" (err=<nil>)
	I1129 08:40:55.126815  328955 status.go:384] host is not running, skipping remaining checks
	I1129 08:40:55.126821  328955 status.go:176] ha-721104-m02 status: &{Name:ha-721104-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 08:40:55.126843  328955 status.go:174] checking status of ha-721104-m03 ...
	I1129 08:40:55.127133  328955 cli_runner.go:164] Run: docker container inspect ha-721104-m03 --format={{.State.Status}}
	I1129 08:40:55.144825  328955 status.go:371] ha-721104-m03 host status = "Running" (err=<nil>)
	I1129 08:40:55.144845  328955 host.go:66] Checking if "ha-721104-m03" exists ...
	I1129 08:40:55.145109  328955 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-721104-m03
	I1129 08:40:55.162033  328955 host.go:66] Checking if "ha-721104-m03" exists ...
	I1129 08:40:55.162350  328955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 08:40:55.162388  328955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-721104-m03
	I1129 08:40:55.178885  328955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/ha-721104-m03/id_rsa Username:docker}
	I1129 08:40:55.277120  328955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 08:40:55.289771  328955 kubeconfig.go:125] found "ha-721104" server: "https://192.168.49.254:8443"
	I1129 08:40:55.289800  328955 api_server.go:166] Checking apiserver status ...
	I1129 08:40:55.289840  328955 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 08:40:55.302321  328955 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1275/cgroup
	W1129 08:40:55.310580  328955 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1275/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1129 08:40:55.310633  328955 ssh_runner.go:195] Run: ls
	I1129 08:40:55.314306  328955 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1129 08:40:55.318379  328955 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1129 08:40:55.318401  328955 status.go:463] ha-721104-m03 apiserver status = Running (err=<nil>)
	I1129 08:40:55.318411  328955 status.go:176] ha-721104-m03 status: &{Name:ha-721104-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 08:40:55.318431  328955 status.go:174] checking status of ha-721104-m04 ...
	I1129 08:40:55.318778  328955 cli_runner.go:164] Run: docker container inspect ha-721104-m04 --format={{.State.Status}}
	I1129 08:40:55.336580  328955 status.go:371] ha-721104-m04 host status = "Running" (err=<nil>)
	I1129 08:40:55.336606  328955 host.go:66] Checking if "ha-721104-m04" exists ...
	I1129 08:40:55.336916  328955 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-721104-m04
	I1129 08:40:55.355820  328955 host.go:66] Checking if "ha-721104-m04" exists ...
	I1129 08:40:55.356065  328955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 08:40:55.356101  328955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-721104-m04
	I1129 08:40:55.373277  328955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/ha-721104-m04/id_rsa Username:docker}
	I1129 08:40:55.471920  328955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 08:40:55.484059  328955 status.go:176] ha-721104-m04 status: &{Name:ha-721104-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.71s)
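Note the exit code in the output above: with m02 stopped, the status command still prints every node but exits non-zero (status 7 in this run), so a caller can detect the degraded state without parsing the text. An illustrative check built from the same commands:

    minikube -p ha-721104 node stop m02 --alsologtostderr -v 5
    minikube -p ha-721104 status || echo "cluster degraded, status exited with $?"   # exited 7 in the run above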

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

TestMultiControlPlane/serial/RestartSecondaryNode (8.64s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-721104 node start m02 --alsologtostderr -v 5: (7.667808621s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.64s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.92s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.92s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (95.84s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 stop --alsologtostderr -v 5
E1129 08:41:33.481006  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/addons-509184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-721104 stop --alsologtostderr -v 5: (37.295985413s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 start --wait true --alsologtostderr -v 5
E1129 08:42:01.194536  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/addons-509184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:42:01.548683  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/functional-036665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:42:01.555178  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/functional-036665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:42:01.566595  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/functional-036665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:42:01.588050  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/functional-036665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:42:01.629506  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/functional-036665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:42:01.711371  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/functional-036665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:42:01.873678  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/functional-036665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:42:02.195460  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/functional-036665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:42:02.837036  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/functional-036665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:42:04.118322  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/functional-036665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:42:06.679857  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/functional-036665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:42:11.802095  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/functional-036665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:42:22.043464  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/functional-036665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-721104 start --wait true --alsologtostderr -v 5: (58.401277162s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (95.84s)
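The point of this check is that the node list recorded before the stop matches the list after the full restart. The same sequence by hand, using the commands from the run above:

    minikube -p ha-721104 node list        # record the list
    minikube -p ha-721104 stop
    minikube -p ha-721104 start --wait true
    minikube -p ha-721104 node list        # should match the pre-stop list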

TestMultiControlPlane/serial/DeleteSecondaryNode (9.46s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 node delete m03 --alsologtostderr -v 5
E1129 08:42:42.524887  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/functional-036665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-721104 node delete m03 --alsologtostderr -v 5: (8.623467091s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.46s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

TestMultiControlPlane/serial/StopCluster (36.14s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 stop --alsologtostderr -v 5
E1129 08:43:23.487054  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/functional-036665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-721104 stop --alsologtostderr -v 5: (36.017461196s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-721104 status --alsologtostderr -v 5: exit status 7 (118.277812ms)
-- stdout --
	ha-721104
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-721104-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-721104-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1129 08:43:27.866231  345263 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:43:27.866477  345263 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:43:27.866487  345263 out.go:374] Setting ErrFile to fd 2...
	I1129 08:43:27.866492  345263 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:43:27.866693  345263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-255825/.minikube/bin
	I1129 08:43:27.866891  345263 out.go:368] Setting JSON to false
	I1129 08:43:27.866918  345263 mustload.go:66] Loading cluster: ha-721104
	I1129 08:43:27.866999  345263 notify.go:221] Checking for updates...
	I1129 08:43:27.867264  345263 config.go:182] Loaded profile config "ha-721104": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 08:43:27.867279  345263 status.go:174] checking status of ha-721104 ...
	I1129 08:43:27.867716  345263 cli_runner.go:164] Run: docker container inspect ha-721104 --format={{.State.Status}}
	I1129 08:43:27.886716  345263 status.go:371] ha-721104 host status = "Stopped" (err=<nil>)
	I1129 08:43:27.886756  345263 status.go:384] host is not running, skipping remaining checks
	I1129 08:43:27.886763  345263 status.go:176] ha-721104 status: &{Name:ha-721104 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 08:43:27.886789  345263 status.go:174] checking status of ha-721104-m02 ...
	I1129 08:43:27.887047  345263 cli_runner.go:164] Run: docker container inspect ha-721104-m02 --format={{.State.Status}}
	I1129 08:43:27.904402  345263 status.go:371] ha-721104-m02 host status = "Stopped" (err=<nil>)
	I1129 08:43:27.904427  345263 status.go:384] host is not running, skipping remaining checks
	I1129 08:43:27.904447  345263 status.go:176] ha-721104-m02 status: &{Name:ha-721104-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 08:43:27.904471  345263 status.go:174] checking status of ha-721104-m04 ...
	I1129 08:43:27.904775  345263 cli_runner.go:164] Run: docker container inspect ha-721104-m04 --format={{.State.Status}}
	I1129 08:43:27.921659  345263 status.go:371] ha-721104-m04 host status = "Stopped" (err=<nil>)
	I1129 08:43:27.921680  345263 status.go:384] host is not running, skipping remaining checks
	I1129 08:43:27.921689  345263 status.go:176] ha-721104-m04 status: &{Name:ha-721104-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.14s)

TestMultiControlPlane/serial/RestartCluster (50.67s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-721104 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (49.848645091s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (50.67s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

TestMultiControlPlane/serial/AddSecondaryNode (46.9s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 node add --control-plane --alsologtostderr -v 5
E1129 08:44:45.409348  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/functional-036665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-721104 node add --control-plane --alsologtostderr -v 5: (46.003324791s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-721104 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (46.90s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

TestJSONOutput/start/Command (40.81s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-569172 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-569172 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (40.810893591s)
--- PASS: TestJSONOutput/start/Command (40.81s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.68s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-569172 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-569172 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.83s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-569172 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-569172 --output=json --user=testUser: (5.833518307s)
--- PASS: TestJSONOutput/stop/Command (5.83s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-510285 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-510285 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (76.330343ms)
-- stdout --
	{"specversion":"1.0","id":"9bb960d6-7ea7-4947-a7be-9f62c54b9903","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-510285] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e2c2e62e-b44b-4910-bcb5-d7aa85eb7df5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22000"}}
	{"specversion":"1.0","id":"d3bb40e2-8a4a-49b8-96e5-d0a545538d21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"38d88080-8385-46ef-8fc2-1774bbc1cbc2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22000-255825/kubeconfig"}}
	{"specversion":"1.0","id":"ca0a21bd-9352-4feb-b618-4a83187a6364","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-255825/.minikube"}}
	{"specversion":"1.0","id":"eb52a116-3e0a-4a80-9d4c-4b592dc27eb1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a24a2025-9024-45e0-a01e-c8d552fb1cd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"aacd87e7-9abe-4785-9262-71f863c5e957","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-510285" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-510285
--- PASS: TestErrorJSONOutput (0.23s)
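Each line in the stdout block above is a CloudEvents-style JSON record (specversion, id, source, type, data); the last one carries the expected DRV_UNSUPPORTED_OS error that maps to exit code 56. A quick way to pull out just the human-readable messages from such a stream, assuming jq is available on the host:

    minikube start -p json-output-error-510285 --memory=3072 --output=json --wait=true --driver=fail \
      | jq -r '.data.message // empty'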

TestKicCustomNetwork/create_custom_network (38.48s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-475073 --network=
E1129 08:46:33.480949  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/addons-509184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-475073 --network=: (36.295957567s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-475073" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-475073
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-475073: (2.165586565s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.48s)

TestKicCustomNetwork/use_default_bridge_network (25.66s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-007675 --network=bridge
E1129 08:47:01.550066  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/functional-036665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-007675 --network=bridge: (23.60412044s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-007675" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-007675
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-007675: (2.03182186s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.66s)

TestKicExistingNetwork (23.09s)
=== RUN   TestKicExistingNetwork
I1129 08:47:11.997516  259483 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1129 08:47:12.015629  259483 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1129 08:47:12.015700  259483 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1129 08:47:12.015728  259483 cli_runner.go:164] Run: docker network inspect existing-network
W1129 08:47:12.031294  259483 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1129 08:47:12.031323  259483 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1129 08:47:12.031345  259483 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1129 08:47:12.031458  259483 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1129 08:47:12.050372  259483 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f69c672bf913 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:26:40:f4:ed:4f:ab} reservation:<nil>}
I1129 08:47:12.050886  259483 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015c86a0}
I1129 08:47:12.050934  259483 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1129 08:47:12.051009  259483 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1129 08:47:12.096713  259483 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-745376 --network=existing-network
E1129 08:47:29.255147  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/functional-036665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-745376 --network=existing-network: (20.966259527s)
helpers_test.go:175: Cleaning up "existing-network-745376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-745376
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-745376: (1.994521133s)
I1129 08:47:35.074858  259483 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.09s)
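The log above shows the sequence the test follows: probe for the network (it does not exist yet), pick the free 192.168.58.0/24 subnet, create the bridge network directly with docker, and only then start minikube against it. Stripped to the essentials, with minikube's extra -o driver options omitted and names taken from this run:

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
    minikube start -p existing-network-745376 --network=existing-network
    minikube delete -p existing-network-745376
    docker network rm existing-network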

TestKicCustomSubnet (27.52s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-389260 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-389260 --subnet=192.168.60.0/24: (25.305735513s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-389260 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-389260" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-389260
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-389260: (2.197694169s)
--- PASS: TestKicCustomSubnet (27.52s)

TestKicStaticIP (27.72s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-055978 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-055978 --static-ip=192.168.200.200: (25.396974432s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-055978 ip
helpers_test.go:175: Cleaning up "static-ip-055978" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-055978
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-055978: (2.171030029s)
--- PASS: TestKicStaticIP (27.72s)
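Here the profile is started with a fixed container IP and the test reads it back with the ip subcommand. The same two commands by hand (address taken from this run):

    minikube start -p static-ip-055978 --static-ip=192.168.200.200
    minikube -p static-ip-055978 ip    # prints the node IP; 192.168.200.200 on a successful run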

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (46.42s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-867326 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-867326 --driver=docker  --container-runtime=containerd: (19.929547436s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-869523 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-869523 --driver=docker  --container-runtime=containerd: (20.327309296s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-867326
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-869523
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-869523" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-869523
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-869523: (2.417692555s)
helpers_test.go:175: Cleaning up "first-867326" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-867326
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-867326: (2.436136687s)
--- PASS: TestMinikubeProfile (46.42s)
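A minimal sketch of the two-profile flow above, with hypothetical profile names first-demo and second-demo:
    out/minikube-linux-amd64 start -p first-demo --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 start -p second-demo --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 profile first-demo        # make first-demo the active profile
    out/minikube-linux-amd64 profile list -ojson       # both clusters should appear in the list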

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.69s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-181098 --memory=3072 --mount-string /tmp/TestMountStartserial6215503/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-181098 --memory=3072 --mount-string /tmp/TestMountStartserial6215503/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.692240598s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.69s)
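A minimal sketch of the host-directory mount start above, assuming a hypothetical host path /srv/demo-mount and profile name mount-demo; the flags mirror the invocation above:
    out/minikube-linux-amd64 start -p mount-demo --memory=3072 --mount-string /srv/demo-mount:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 -p mount-demo ssh -- ls /minikube-host    # should list the contents of /srv/demo-mount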

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-181098 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.47s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-199841 --memory=3072 --mount-string /tmp/TestMountStartserial6215503/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-199841 --memory=3072 --mount-string /tmp/TestMountStartserial6215503/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.468521019s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.47s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-199841 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-181098 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-181098 --alsologtostderr -v=5: (1.71963278s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-199841 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.26s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-199841
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-199841: (1.263548212s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.93s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-199841
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-199841: (6.932756503s)
--- PASS: TestMountStart/serial/RestartStopped (7.93s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-199841 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (62.98s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-394169 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-394169 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m2.474620775s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (62.98s)
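A minimal sketch of bringing up the same two-node topology, with a hypothetical profile name multinode-demo:
    out/minikube-linux-amd64 start -p multinode-demo --wait=true --memory=3072 --nodes=2 --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 -p multinode-demo status          # the control plane and the worker should both be Running
    kubectl --context multinode-demo get nodes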

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.31s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394169 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394169 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-394169 -- rollout status deployment/busybox: (3.663462657s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394169 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394169 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394169 -- exec busybox-7b57f96db7-6ldd7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394169 -- exec busybox-7b57f96db7-wpf2p -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394169 -- exec busybox-7b57f96db7-6ldd7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394169 -- exec busybox-7b57f96db7-wpf2p -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394169 -- exec busybox-7b57f96db7-6ldd7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394169 -- exec busybox-7b57f96db7-wpf2p -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.31s)
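A minimal sketch of the DNS checks above, assuming the hypothetical multinode-demo profile and taking one busybox pod name from the deployment:
    out/minikube-linux-amd64 kubectl -p multinode-demo -- apply -f testdata/multinodes/multinode-pod-dns-test.yaml
    out/minikube-linux-amd64 kubectl -p multinode-demo -- rollout status deployment/busybox
    POD=$(out/minikube-linux-amd64 kubectl -p multinode-demo -- get pods -o jsonpath='{.items[0].metadata.name}')
    out/minikube-linux-amd64 kubectl -p multinode-demo -- exec "$POD" -- nslookup kubernetes.default.svc.cluster.local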

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.85s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394169 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394169 -- exec busybox-7b57f96db7-6ldd7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394169 -- exec busybox-7b57f96db7-6ldd7 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394169 -- exec busybox-7b57f96db7-wpf2p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-394169 -- exec busybox-7b57f96db7-wpf2p -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)
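The host-reachability check above boils down to two exec calls per pod; a sketch reusing the hypothetical $POD variable from the DeployApp2Nodes sketch:
    HOST_IP=$(out/minikube-linux-amd64 kubectl -p multinode-demo -- exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    out/minikube-linux-amd64 kubectl -p multinode-demo -- exec "$POD" -- sh -c "ping -c 1 $HOST_IP"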

                                                
                                    
TestMultiNode/serial/AddNode (26.83s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-394169 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-394169 -v=5 --alsologtostderr: (26.178646064s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (26.83s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-394169 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.67s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.11s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 cp testdata/cp-test.txt multinode-394169:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 ssh -n multinode-394169 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 cp multinode-394169:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile434003027/001/cp-test_multinode-394169.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 ssh -n multinode-394169 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 cp multinode-394169:/home/docker/cp-test.txt multinode-394169-m02:/home/docker/cp-test_multinode-394169_multinode-394169-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 ssh -n multinode-394169 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 ssh -n multinode-394169-m02 "sudo cat /home/docker/cp-test_multinode-394169_multinode-394169-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 cp multinode-394169:/home/docker/cp-test.txt multinode-394169-m03:/home/docker/cp-test_multinode-394169_multinode-394169-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 ssh -n multinode-394169 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 ssh -n multinode-394169-m03 "sudo cat /home/docker/cp-test_multinode-394169_multinode-394169-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 cp testdata/cp-test.txt multinode-394169-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 ssh -n multinode-394169-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 cp multinode-394169-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile434003027/001/cp-test_multinode-394169-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 ssh -n multinode-394169-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 cp multinode-394169-m02:/home/docker/cp-test.txt multinode-394169:/home/docker/cp-test_multinode-394169-m02_multinode-394169.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 ssh -n multinode-394169-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 ssh -n multinode-394169 "sudo cat /home/docker/cp-test_multinode-394169-m02_multinode-394169.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 cp multinode-394169-m02:/home/docker/cp-test.txt multinode-394169-m03:/home/docker/cp-test_multinode-394169-m02_multinode-394169-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 ssh -n multinode-394169-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 ssh -n multinode-394169-m03 "sudo cat /home/docker/cp-test_multinode-394169-m02_multinode-394169-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 cp testdata/cp-test.txt multinode-394169-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 ssh -n multinode-394169-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 cp multinode-394169-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile434003027/001/cp-test_multinode-394169-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 ssh -n multinode-394169-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 cp multinode-394169-m03:/home/docker/cp-test.txt multinode-394169:/home/docker/cp-test_multinode-394169-m03_multinode-394169.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 ssh -n multinode-394169-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 ssh -n multinode-394169 "sudo cat /home/docker/cp-test_multinode-394169-m03_multinode-394169.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 cp multinode-394169-m03:/home/docker/cp-test.txt multinode-394169-m02:/home/docker/cp-test_multinode-394169-m03_multinode-394169-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 ssh -n multinode-394169-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 ssh -n multinode-394169-m02 "sudo cat /home/docker/cp-test_multinode-394169-m03_multinode-394169-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.11s)
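A minimal sketch of the cp round-trip above, with the hypothetical multinode-demo profile:
    out/minikube-linux-amd64 -p multinode-demo cp testdata/cp-test.txt multinode-demo:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"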

                                                
                                    
TestMultiNode/serial/StopNode (2.29s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 node stop m03
E1129 08:51:33.481066  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/addons-509184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-394169 node stop m03: (1.266471511s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-394169 status: exit status 7 (518.270338ms)

                                                
                                                
-- stdout --
	multinode-394169
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-394169-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-394169-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-394169 status --alsologtostderr: exit status 7 (508.493962ms)

                                                
                                                
-- stdout --
	multinode-394169
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-394169-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-394169-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 08:51:34.582506  407359 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:51:34.582807  407359 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:51:34.582818  407359 out.go:374] Setting ErrFile to fd 2...
	I1129 08:51:34.582825  407359 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:51:34.583071  407359 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-255825/.minikube/bin
	I1129 08:51:34.583275  407359 out.go:368] Setting JSON to false
	I1129 08:51:34.583307  407359 mustload.go:66] Loading cluster: multinode-394169
	I1129 08:51:34.583413  407359 notify.go:221] Checking for updates...
	I1129 08:51:34.583683  407359 config.go:182] Loaded profile config "multinode-394169": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 08:51:34.583701  407359 status.go:174] checking status of multinode-394169 ...
	I1129 08:51:34.584225  407359 cli_runner.go:164] Run: docker container inspect multinode-394169 --format={{.State.Status}}
	I1129 08:51:34.603237  407359 status.go:371] multinode-394169 host status = "Running" (err=<nil>)
	I1129 08:51:34.603266  407359 host.go:66] Checking if "multinode-394169" exists ...
	I1129 08:51:34.603574  407359 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-394169
	I1129 08:51:34.621686  407359 host.go:66] Checking if "multinode-394169" exists ...
	I1129 08:51:34.621983  407359 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 08:51:34.622055  407359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-394169
	I1129 08:51:34.639601  407359 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/multinode-394169/id_rsa Username:docker}
	I1129 08:51:34.738296  407359 ssh_runner.go:195] Run: systemctl --version
	I1129 08:51:34.744679  407359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 08:51:34.757362  407359 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 08:51:34.817577  407359 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-29 08:51:34.807999773 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 08:51:34.818390  407359 kubeconfig.go:125] found "multinode-394169" server: "https://192.168.67.2:8443"
	I1129 08:51:34.818430  407359 api_server.go:166] Checking apiserver status ...
	I1129 08:51:34.818477  407359 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 08:51:34.830636  407359 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1297/cgroup
	W1129 08:51:34.839366  407359 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1297/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1129 08:51:34.839414  407359 ssh_runner.go:195] Run: ls
	I1129 08:51:34.843122  407359 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1129 08:51:34.847160  407359 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1129 08:51:34.847180  407359 status.go:463] multinode-394169 apiserver status = Running (err=<nil>)
	I1129 08:51:34.847189  407359 status.go:176] multinode-394169 status: &{Name:multinode-394169 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 08:51:34.847206  407359 status.go:174] checking status of multinode-394169-m02 ...
	I1129 08:51:34.847449  407359 cli_runner.go:164] Run: docker container inspect multinode-394169-m02 --format={{.State.Status}}
	I1129 08:51:34.864694  407359 status.go:371] multinode-394169-m02 host status = "Running" (err=<nil>)
	I1129 08:51:34.864715  407359 host.go:66] Checking if "multinode-394169-m02" exists ...
	I1129 08:51:34.864976  407359 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-394169-m02
	I1129 08:51:34.881724  407359 host.go:66] Checking if "multinode-394169-m02" exists ...
	I1129 08:51:34.881991  407359 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 08:51:34.882033  407359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-394169-m02
	I1129 08:51:34.898951  407359 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/22000-255825/.minikube/machines/multinode-394169-m02/id_rsa Username:docker}
	I1129 08:51:34.998290  407359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 08:51:35.010624  407359 status.go:176] multinode-394169-m02 status: &{Name:multinode-394169-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1129 08:51:35.010665  407359 status.go:174] checking status of multinode-394169-m03 ...
	I1129 08:51:35.010971  407359 cli_runner.go:164] Run: docker container inspect multinode-394169-m03 --format={{.State.Status}}
	I1129 08:51:35.029413  407359 status.go:371] multinode-394169-m03 host status = "Stopped" (err=<nil>)
	I1129 08:51:35.029456  407359 status.go:384] host is not running, skipping remaining checks
	I1129 08:51:35.029463  407359 status.go:176] multinode-394169-m03 status: &{Name:multinode-394169-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
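A minimal sketch of stopping and restarting a single node, with the hypothetical multinode-demo profile; status exits 7 while any node is stopped, as seen above:
    out/minikube-linux-amd64 -p multinode-demo node stop m03
    out/minikube-linux-amd64 -p multinode-demo status          # exit status 7; m03 shows host/kubelet Stopped
    out/minikube-linux-amd64 -p multinode-demo node start m03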

                                                
                                    
TestMultiNode/serial/StartAfterStop (6.97s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-394169 node start m03 -v=5 --alsologtostderr: (6.22304502s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (6.97s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (78.74s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-394169
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-394169
E1129 08:52:01.548032  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/functional-036665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-394169: (25.084843971s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-394169 --wait=true -v=5 --alsologtostderr
E1129 08:52:56.556024  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/addons-509184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-394169 --wait=true -v=5 --alsologtostderr: (53.518623025s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-394169
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.74s)
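A minimal sketch of the stop/restart cycle above, with the hypothetical multinode-demo profile:
    out/minikube-linux-amd64 node list -p multinode-demo
    out/minikube-linux-amd64 stop -p multinode-demo
    out/minikube-linux-amd64 start -p multinode-demo --wait=true
    out/minikube-linux-amd64 node list -p multinode-demo       # the node list should match the pre-stop output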

                                                
                                    
TestMultiNode/serial/DeleteNode (5.33s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-394169 node delete m03: (4.711936911s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.33s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.04s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-394169 stop: (23.831810166s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-394169 status: exit status 7 (105.421369ms)

                                                
                                                
-- stdout --
	multinode-394169
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-394169-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-394169 status --alsologtostderr: exit status 7 (102.381707ms)

                                                
                                                
-- stdout --
	multinode-394169
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-394169-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 08:53:30.076158  417128 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:53:30.076421  417128 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:53:30.076429  417128 out.go:374] Setting ErrFile to fd 2...
	I1129 08:53:30.076433  417128 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:53:30.076634  417128 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-255825/.minikube/bin
	I1129 08:53:30.076819  417128 out.go:368] Setting JSON to false
	I1129 08:53:30.076845  417128 mustload.go:66] Loading cluster: multinode-394169
	I1129 08:53:30.076988  417128 notify.go:221] Checking for updates...
	I1129 08:53:30.077189  417128 config.go:182] Loaded profile config "multinode-394169": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 08:53:30.077205  417128 status.go:174] checking status of multinode-394169 ...
	I1129 08:53:30.077685  417128 cli_runner.go:164] Run: docker container inspect multinode-394169 --format={{.State.Status}}
	I1129 08:53:30.097194  417128 status.go:371] multinode-394169 host status = "Stopped" (err=<nil>)
	I1129 08:53:30.097241  417128 status.go:384] host is not running, skipping remaining checks
	I1129 08:53:30.097251  417128 status.go:176] multinode-394169 status: &{Name:multinode-394169 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 08:53:30.097287  417128 status.go:174] checking status of multinode-394169-m02 ...
	I1129 08:53:30.097576  417128 cli_runner.go:164] Run: docker container inspect multinode-394169-m02 --format={{.State.Status}}
	I1129 08:53:30.115965  417128 status.go:371] multinode-394169-m02 host status = "Stopped" (err=<nil>)
	I1129 08:53:30.115993  417128 status.go:384] host is not running, skipping remaining checks
	I1129 08:53:30.116000  417128 status.go:176] multinode-394169-m02 status: &{Name:multinode-394169-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.04s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (47.64s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-394169 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-394169 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (47.013572655s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-394169 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (47.64s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (26.24s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-394169
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-394169-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-394169-m02 --driver=docker  --container-runtime=containerd: exit status 14 (82.2319ms)

                                                
                                                
-- stdout --
	* [multinode-394169-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-255825/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-255825/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-394169-m02' is duplicated with machine name 'multinode-394169-m02' in profile 'multinode-394169'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-394169-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-394169-m03 --driver=docker  --container-runtime=containerd: (23.389062775s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-394169
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-394169: exit status 80 (310.392952ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-394169 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-394169-m03 already exists in multinode-394169-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-394169-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-394169-m03: (2.394068305s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.24s)

                                                
                                    
TestPreload (110.85s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-964509 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-964509 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd: (56.209210537s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-964509 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-964509 image pull gcr.io/k8s-minikube/busybox: (2.609331567s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-964509
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-964509: (5.747944277s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-964509 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1129 08:56:33.480934  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/addons-509184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-964509 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (43.604622875s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-964509 image list
helpers_test.go:175: Cleaning up "test-preload-964509" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-964509
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-964509: (2.445149716s)
--- PASS: TestPreload (110.85s)
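A minimal sketch of the preload check above, assuming a hypothetical profile name preload-demo:
    out/minikube-linux-amd64 start -p preload-demo --memory=3072 --wait=true --preload=false --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 -p preload-demo image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-amd64 stop -p preload-demo
    out/minikube-linux-amd64 start -p preload-demo --preload=true --wait=true --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 -p preload-demo image list        # busybox should still be listed after the restart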

                                                
                                    
TestScheduledStopUnix (96.92s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-479261 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-479261 --memory=3072 --driver=docker  --container-runtime=containerd: (20.194037878s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-479261 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1129 08:56:59.409942  435453 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:56:59.410081  435453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:56:59.410092  435453 out.go:374] Setting ErrFile to fd 2...
	I1129 08:56:59.410096  435453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:56:59.410301  435453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-255825/.minikube/bin
	I1129 08:56:59.410553  435453 out.go:368] Setting JSON to false
	I1129 08:56:59.410641  435453 mustload.go:66] Loading cluster: scheduled-stop-479261
	I1129 08:56:59.411010  435453 config.go:182] Loaded profile config "scheduled-stop-479261": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 08:56:59.411075  435453 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/scheduled-stop-479261/config.json ...
	I1129 08:56:59.411261  435453 mustload.go:66] Loading cluster: scheduled-stop-479261
	I1129 08:56:59.411360  435453 config.go:182] Loaded profile config "scheduled-stop-479261": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-479261 -n scheduled-stop-479261
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-479261 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1129 08:56:59.814402  435605 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:56:59.814517  435605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:56:59.814526  435605 out.go:374] Setting ErrFile to fd 2...
	I1129 08:56:59.814530  435605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:56:59.814704  435605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-255825/.minikube/bin
	I1129 08:56:59.814964  435605 out.go:368] Setting JSON to false
	I1129 08:56:59.815150  435605 daemonize_unix.go:73] killing process 435488 as it is an old scheduled stop
	I1129 08:56:59.815264  435605 mustload.go:66] Loading cluster: scheduled-stop-479261
	I1129 08:56:59.815642  435605 config.go:182] Loaded profile config "scheduled-stop-479261": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 08:56:59.815717  435605 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/scheduled-stop-479261/config.json ...
	I1129 08:56:59.815918  435605 mustload.go:66] Loading cluster: scheduled-stop-479261
	I1129 08:56:59.816034  435605 config.go:182] Loaded profile config "scheduled-stop-479261": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1129 08:56:59.821226  259483 retry.go:31] will retry after 85.628µs: open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/scheduled-stop-479261/pid: no such file or directory
I1129 08:56:59.822427  259483 retry.go:31] will retry after 174.368µs: open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/scheduled-stop-479261/pid: no such file or directory
I1129 08:56:59.823571  259483 retry.go:31] will retry after 312.782µs: open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/scheduled-stop-479261/pid: no such file or directory
I1129 08:56:59.824701  259483 retry.go:31] will retry after 240.836µs: open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/scheduled-stop-479261/pid: no such file or directory
I1129 08:56:59.825839  259483 retry.go:31] will retry after 565.507µs: open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/scheduled-stop-479261/pid: no such file or directory
I1129 08:56:59.826963  259483 retry.go:31] will retry after 744.211µs: open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/scheduled-stop-479261/pid: no such file or directory
I1129 08:56:59.828088  259483 retry.go:31] will retry after 894.923µs: open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/scheduled-stop-479261/pid: no such file or directory
I1129 08:56:59.829211  259483 retry.go:31] will retry after 2.175911ms: open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/scheduled-stop-479261/pid: no such file or directory
I1129 08:56:59.832410  259483 retry.go:31] will retry after 2.273457ms: open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/scheduled-stop-479261/pid: no such file or directory
I1129 08:56:59.835640  259483 retry.go:31] will retry after 5.583665ms: open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/scheduled-stop-479261/pid: no such file or directory
I1129 08:56:59.841891  259483 retry.go:31] will retry after 4.309494ms: open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/scheduled-stop-479261/pid: no such file or directory
I1129 08:56:59.847128  259483 retry.go:31] will retry after 6.768405ms: open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/scheduled-stop-479261/pid: no such file or directory
I1129 08:56:59.854344  259483 retry.go:31] will retry after 19.214342ms: open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/scheduled-stop-479261/pid: no such file or directory
I1129 08:56:59.874626  259483 retry.go:31] will retry after 18.130733ms: open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/scheduled-stop-479261/pid: no such file or directory
I1129 08:56:59.893888  259483 retry.go:31] will retry after 15.155023ms: open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/scheduled-stop-479261/pid: no such file or directory
I1129 08:56:59.910173  259483 retry.go:31] will retry after 37.299029ms: open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/scheduled-stop-479261/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-479261 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1129 08:57:01.548266  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/functional-036665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-479261 -n scheduled-stop-479261
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-479261
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-479261 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1129 08:57:25.744529  436481 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:57:25.744828  436481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:57:25.744839  436481 out.go:374] Setting ErrFile to fd 2...
	I1129 08:57:25.744844  436481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:57:25.745046  436481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-255825/.minikube/bin
	I1129 08:57:25.745286  436481 out.go:368] Setting JSON to false
	I1129 08:57:25.745365  436481 mustload.go:66] Loading cluster: scheduled-stop-479261
	I1129 08:57:25.745682  436481 config.go:182] Loaded profile config "scheduled-stop-479261": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 08:57:25.745779  436481 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/scheduled-stop-479261/config.json ...
	I1129 08:57:25.745988  436481 mustload.go:66] Loading cluster: scheduled-stop-479261
	I1129 08:57:25.746090  436481 config.go:182] Loaded profile config "scheduled-stop-479261": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-479261
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-479261: exit status 7 (84.540703ms)

                                                
                                                
-- stdout --
	scheduled-stop-479261
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-479261 -n scheduled-stop-479261
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-479261 -n scheduled-stop-479261: exit status 7 (83.635704ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-479261" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-479261
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-479261: (5.138575755s)
--- PASS: TestScheduledStopUnix (96.92s)
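A minimal sketch of the scheduled-stop flow above, with a hypothetical profile name sched-demo:
    out/minikube-linux-amd64 stop -p sched-demo --schedule 5m           # schedule a stop five minutes out
    out/minikube-linux-amd64 stop -p sched-demo --cancel-scheduled      # cancel any pending scheduled stop
    out/minikube-linux-amd64 stop -p sched-demo --schedule 15s          # reschedule; the cluster stops about 15s later
    out/minikube-linux-amd64 status -p sched-demo                       # exit status 7 once the host is Stopped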

                                                
                                    
TestInsufficientStorage (9.3s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-286711 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-286711 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (6.806847105s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ed6c4979-65bf-49cb-ab5a-a8474633ff75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-286711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2cd29439-5afe-4f2b-9323-e2728a4a52d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22000"}}
	{"specversion":"1.0","id":"39b9e1a0-0bf7-49af-a4c9-f75e98bc5854","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"78bad7b8-5ce4-459e-9574-416d5e3dcea3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22000-255825/kubeconfig"}}
	{"specversion":"1.0","id":"2434504b-0979-4222-ba81-3278c05b8b3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-255825/.minikube"}}
	{"specversion":"1.0","id":"7b6eecf4-10c3-4286-aa60-f3c62f6b3d74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"aa5393eb-4484-4e4a-ac09-f8a7d9298a4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5259e033-7bda-4cd6-b504-b6a459d9a3a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ab543c1d-03d6-45b2-ac58-d8e47dd41a87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"021a77a6-b1d3-4e76-8b58-77ea2ceb1cd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"cda6bf93-ccce-45ce-8365-5c19d770cd4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"0a1c5e87-c03d-4d04-99ec-1e4135babb68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-286711\" primary control-plane node in \"insufficient-storage-286711\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f06148bd-3d66-4165-be6f-e536d7e65732","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763789673-21948 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"28a2da70-f9a3-40cf-ad85-80fa042f6953","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"ed59febf-d1bd-44b7-afa5-7d3e91f656ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-286711 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-286711 --output=json --layout=cluster: exit status 7 (297.076394ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-286711","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-286711","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1129 08:58:23.161837  438762 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-286711" does not appear in /home/jenkins/minikube-integration/22000-255825/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-286711 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-286711 --output=json --layout=cluster: exit status 7 (300.936705ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-286711","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-286711","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1129 08:58:23.463691  438875 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-286711" does not appear in /home/jenkins/minikube-integration/22000-255825/kubeconfig
	E1129 08:58:23.474257  438875 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/insufficient-storage-286711/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-286711" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-286711
E1129 08:58:24.617062  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/functional-036665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-286711: (1.894263255s)
--- PASS: TestInsufficientStorage (9.30s)
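
For reference, the --layout=cluster payloads above are plain JSON, so the numeric codes (507 InsufficientStorage, 405 Stopped, 500 Error) can be checked programmatically. A minimal Go sketch of decoding that output, assuming only the fields visible in this log (the struct below is illustrative, not minikube's own type):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// clusterStatus mirrors only the fields visible in the status output above.
type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []struct {
		Name       string
		StatusCode int
		StatusName string
	}
}

func main() {
	// The command exits 7 on purpose for an unhealthy cluster, so the error is
	// ignored here and stdout is parsed regardless.
	out, _ := exec.Command("out/minikube-linux-amd64", "status",
		"-p", "insufficient-storage-286711", "--output=json", "--layout=cluster").Output()

	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	fmt.Println(st.StatusCode, st.StatusName) // 507 InsufficientStorage in the run above
	for _, n := range st.Nodes {
		fmt.Println("node:", n.Name, n.StatusName)
	}
}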

                                                
                                    
TestRunningBinaryUpgrade (52.3s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.860708092 start -p running-upgrade-212727 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.860708092 start -p running-upgrade-212727 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (25.650136828s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-212727 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-212727 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (20.970186616s)
helpers_test.go:175: Cleaning up "running-upgrade-212727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-212727
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-212727: (2.014797646s)
--- PASS: TestRunningBinaryUpgrade (52.30s)
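
For context, the sequence recorded above is: start a profile with a previously released minikube binary, run start again on the same live profile with the freshly built binary, then delete the profile. A rough Go sketch of driving that sequence with os/exec (binary paths and profile name copied from this run; the run helper is hypothetical, not the actual test code):

package main

import (
	"fmt"
	"os/exec"
)

// run executes one minikube invocation and surfaces its combined output on failure.
func run(bin string, args ...string) error {
	out, err := exec.Command(bin, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w\n%s", bin, args, err, out)
	}
	return nil
}

func main() {
	profile := "running-upgrade-212727"

	// 1. Bring the cluster up with the released binary (path as in the log above).
	if err := run("/tmp/minikube-v1.35.0.860708092", "start", "-p", profile,
		"--memory=3072", "--vm-driver=docker", "--container-runtime=containerd"); err != nil {
		panic(err)
	}
	// 2. Upgrade in place: start the same, still-running profile with the new binary.
	if err := run("out/minikube-linux-amd64", "start", "-p", profile,
		"--memory=3072", "--driver=docker", "--container-runtime=containerd"); err != nil {
		panic(err)
	}
	// 3. Clean up the profile.
	_ = run("out/minikube-linux-amd64", "delete", "-p", profile)
}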

                                                
                                    
TestKubernetesUpgrade (324.86s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-806701 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-806701 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (24.593484571s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-806701
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-806701: (1.945544199s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-806701 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-806701 status --format={{.Host}}: exit status 7 (87.534714ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-806701 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-806701 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m42.308834776s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-806701 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-806701 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-806701 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (98.556155ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-806701] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-255825/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-255825/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-806701
	    minikube start -p kubernetes-upgrade-806701 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8067012 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-806701 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-806701 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-806701 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (13.263889027s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-806701" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-806701
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-806701: (2.489588495s)
--- PASS: TestKubernetesUpgrade (324.86s)
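
Several steps above deliberately expect a non-zero exit: status 7 from status on a stopped cluster (noted as "may be ok"), and status 106 for the rejected downgrade. A small Go sketch of reading such an exit code from os/exec, using the downgrade attempt above as the example (not the actual test helper):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "kubernetes-upgrade-806701", "--memory=3072",
		"--kubernetes-version=v1.28.0", "--driver=docker", "--container-runtime=containerd")

	err := cmd.Run()
	var exitErr *exec.ExitError
	switch {
	case errors.As(err, &exitErr):
		// The downgrade attempt above exits with status 106 (K8S_DOWNGRADE_UNSUPPORTED).
		fmt.Println("exit code:", exitErr.ExitCode())
	case err != nil:
		panic(err) // the command could not be started at all
	default:
		fmt.Println("unexpectedly succeeded")
	}
}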

                                                
                                    
TestMissingContainerUpgrade (118.51s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.3939940977 start -p missing-upgrade-928540 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.3939940977 start -p missing-upgrade-928540 --memory=3072 --driver=docker  --container-runtime=containerd: (52.573381735s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-928540
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-928540: (1.621409994s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-928540
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-928540 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-928540 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (56.534177489s)
helpers_test.go:175: Cleaning up "missing-upgrade-928540" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-928540
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-928540: (3.82995421s)
--- PASS: TestMissingContainerUpgrade (118.51s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-870063 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-870063 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (94.979361ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-870063] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-255825/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-255825/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (38.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-870063 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-870063 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (37.69845103s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-870063 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.17s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (10.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-870063 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-870063 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (5.699744438s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-870063 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-870063 status -o json: exit status 2 (362.95911ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-870063","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-870063
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-870063: (4.271917754s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (10.34s)

                                                
                                    
TestNoKubernetes/serial/Start (7.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-870063 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-870063 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.236755636s)
--- PASS: TestNoKubernetes/serial/Start (7.24s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22000-255825/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-870063 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-870063 "sudo systemctl is-active --quiet service kubelet": exit status 1 (323.572019ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)
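
The check above passes because the ssh'd "systemctl is-active --quiet" exits non-zero (status 3, i.e. the kubelet unit is not active), so the assertion reduces to "this command must fail". A sketch of that check in Go, reusing the profile name from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// is-active --quiet prints nothing and reports the unit state via its exit code:
	// 0 means active, anything else (3 in the log above) means not active.
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-870063",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not running, as expected:", err)
		return
	}
	fmt.Println("kubelet is unexpectedly active")
}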

                                                
                                    
TestNoKubernetes/serial/ProfileList (6.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (2.504293075s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (4.031380547s)
--- PASS: TestNoKubernetes/serial/ProfileList (6.54s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.95s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.95s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (47.69s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.4177801297 start -p stopped-upgrade-209502 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.4177801297 start -p stopped-upgrade-209502 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (23.228851355s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.4177801297 -p stopped-upgrade-209502 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.4177801297 -p stopped-upgrade-209502 stop: (4.381910193s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-209502 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-209502 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (20.078544536s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (47.69s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-870063
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-870063: (2.383040267s)
--- PASS: TestNoKubernetes/serial/Stop (2.38s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-870063 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-870063 --driver=docker  --container-runtime=containerd: (7.289785506s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.29s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-870063 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-870063 "sudo systemctl is-active --quiet service kubelet": exit status 1 (289.073135ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.21s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-209502
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-209502: (1.209320453s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.21s)

                                                
                                    
TestPause/serial/Start (42.04s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-563162 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-563162 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (42.042656344s)
--- PASS: TestPause/serial/Start (42.04s)

                                                
                                    
TestNetworkPlugins/group/false (4.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-770004 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-770004 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (225.232122ms)

                                                
                                                
-- stdout --
	* [false-770004] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-255825/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-255825/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:00:28.072957  477694 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:00:28.073097  477694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:00:28.073110  477694 out.go:374] Setting ErrFile to fd 2...
	I1129 09:00:28.073116  477694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:00:28.073484  477694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-255825/.minikube/bin
	I1129 09:00:28.074208  477694 out.go:368] Setting JSON to false
	I1129 09:00:28.075683  477694 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6172,"bootTime":1764400656,"procs":305,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:00:28.075772  477694 start.go:143] virtualization: kvm guest
	I1129 09:00:28.077976  477694 out.go:179] * [false-770004] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:00:28.079233  477694 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:00:28.079278  477694 notify.go:221] Checking for updates...
	I1129 09:00:28.081534  477694 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:00:28.083278  477694 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-255825/kubeconfig
	I1129 09:00:28.084788  477694 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-255825/.minikube
	I1129 09:00:28.090055  477694 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:00:28.091432  477694 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:00:28.093412  477694 config.go:182] Loaded profile config "kubernetes-upgrade-806701": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:00:28.093577  477694 config.go:182] Loaded profile config "pause-563162": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1129 09:00:28.093660  477694 config.go:182] Loaded profile config "running-upgrade-212727": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.0
	I1129 09:00:28.093818  477694 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:00:28.124344  477694 docker.go:124] docker version: linux-29.1.1:Docker Engine - Community
	I1129 09:00:28.124535  477694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1129 09:00:28.203022  477694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-29 09:00:28.190354563 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:29.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1129 09:00:28.203132  477694 docker.go:319] overlay module found
	I1129 09:00:28.204869  477694 out.go:179] * Using the docker driver based on user configuration
	I1129 09:00:28.205870  477694 start.go:309] selected driver: docker
	I1129 09:00:28.205888  477694 start.go:927] validating driver "docker" against <nil>
	I1129 09:00:28.205904  477694 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:00:28.207521  477694 out.go:203] 
	W1129 09:00:28.208685  477694 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1129 09:00:28.209838  477694 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-770004 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-770004

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-770004

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-770004

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-770004

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-770004

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-770004

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-770004

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-770004

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-770004

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-770004

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-770004

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-770004" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-770004" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 29 Nov 2025 09:00:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-806701
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 29 Nov 2025 09:00:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: running-upgrade-212727
contexts:
- context:
    cluster: kubernetes-upgrade-806701
    user: kubernetes-upgrade-806701
  name: kubernetes-upgrade-806701
- context:
    cluster: running-upgrade-212727
    extensions:
    - extension:
        last-update: Sat, 29 Nov 2025 09:00:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: running-upgrade-212727
  name: running-upgrade-212727
current-context: running-upgrade-212727
kind: Config
users:
- name: kubernetes-upgrade-806701
  user:
    client-certificate: /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/kubernetes-upgrade-806701/client.crt
    client-key: /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/kubernetes-upgrade-806701/client.key
- name: running-upgrade-212727
  user:
    client-certificate: /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/running-upgrade-212727/client.crt
    client-key: /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/running-upgrade-212727/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-770004

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-770004"

                                                
                                                
----------------------- debugLogs end: false-770004 [took: 3.772089944s] --------------------------------
helpers_test.go:175: Cleaning up "false-770004" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-770004
--- PASS: TestNetworkPlugins/group/false (4.17s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.16s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-563162 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-563162 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.142626563s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.16s)

                                                
                                    
TestPause/serial/Pause (0.75s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-563162 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.75s)

                                                
                                    
TestPause/serial/VerifyStatus (0.35s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-563162 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-563162 --output=json --layout=cluster: exit status 2 (351.037308ms)

                                                
                                                
-- stdout --
	{"Name":"pause-563162","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-563162","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.35s)

                                                
                                    
TestPause/serial/Unpause (0.69s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-563162 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.69s)

TestPause/serial/PauseAgain (0.74s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-563162 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.74s)

TestPause/serial/DeletePaused (3.68s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-563162 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-563162 --alsologtostderr -v=5: (3.683704885s)
--- PASS: TestPause/serial/DeletePaused (3.68s)

TestPause/serial/VerifyDeletedResources (15.76s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (15.708648419s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-563162
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-563162: exit status 1 (16.755651ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-563162: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (15.76s)
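(A sketch of how the same cleanup could be confirmed by hand; the docker --filter flags are standard but were not part of this run, which used the plain commands shown above.)
    docker ps -a --filter name=pause-563162        # no containers left for the profile
    docker volume inspect pause-563162             # "no such volume", as in the stderr above
    docker network ls --filter name=pause-563162   # no leftover profile network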

TestStartStop/group/old-k8s-version/serial/FirstStart (47.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-295154 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-295154 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (47.768229454s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (47.77s)

TestStartStop/group/no-preload/serial/FirstStart (54.21s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-924441 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1129 09:01:33.480451  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/addons-509184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:02:01.548178  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/functional-036665/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-924441 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (54.210874313s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (54.21s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-295154 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-295154 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.88s)

TestStartStop/group/old-k8s-version/serial/Stop (12.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-295154 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-295154 --alsologtostderr -v=3: (12.023405964s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.02s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-924441 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-924441 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.83s)

TestStartStop/group/no-preload/serial/Stop (12.01s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-924441 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-924441 --alsologtostderr -v=3: (12.010794183s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.01s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-295154 -n old-k8s-version-295154
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-295154 -n old-k8s-version-295154: exit status 7 (81.402323ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-295154 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (45.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-295154 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-295154 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (45.003415102s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-295154 -n old-k8s-version-295154
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (45.34s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-924441 -n no-preload-924441
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-924441 -n no-preload-924441: exit status 7 (87.370205ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-924441 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (44.58s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-924441 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-924441 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (44.242920625s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-924441 -n no-preload-924441
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (44.58s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-wvnjh" [58381505-cc49-46db-b1dd-9fed5fb295b2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003531422s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-wvnjh" [58381505-cc49-46db-b1dd-9fed5fb295b2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003350666s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-295154 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2flgl" [ff2c8283-3bb9-4693-a1eb-799f04546cc0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004039228s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-295154 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (2.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-295154 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-295154 -n old-k8s-version-295154
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-295154 -n old-k8s-version-295154: exit status 2 (322.108682ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-295154 -n old-k8s-version-295154
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-295154 -n old-k8s-version-295154: exit status 2 (317.486339ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-295154 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-295154 -n old-k8s-version-295154
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-295154 -n old-k8s-version-295154
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.72s)
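(The pause/verify/unpause cycle exercised above, collected as it could be replayed by hand against the same profile; these are the commands from the log, not additional test steps. "Paused"/"Stopped" with exit status 2 is the expected state while paused.)
    out/minikube-linux-amd64 pause -p old-k8s-version-295154 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-295154 -n old-k8s-version-295154   # Paused, exit 2
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-295154 -n old-k8s-version-295154     # Stopped, exit 2
    out/minikube-linux-amd64 unpause -p old-k8s-version-295154 --alsologtostderr -v=1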

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2flgl" [ff2c8283-3bb9-4693-a1eb-799f04546cc0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003083424s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-924441 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/FirstStart (41.41s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-976238 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-976238 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (41.406661822s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (41.41s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-924441 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.22s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-924441 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-924441 -n no-preload-924441
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-924441 -n no-preload-924441: exit status 2 (337.49301ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-924441 -n no-preload-924441
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-924441 -n no-preload-924441: exit status 2 (327.856315ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-924441 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-924441 --alsologtostderr -v=1: (1.013253116s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-924441 -n no-preload-924441
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-924441 -n no-preload-924441
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.22s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-357829 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-357829 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (44.418545906s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.42s)

TestStartStop/group/newest-cni/serial/FirstStart (32.36s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-106601 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-106601 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (32.359853388s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (32.36s)

TestNetworkPlugins/group/auto/Start (40.83s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-770004 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-770004 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (40.828806795s)
--- PASS: TestNetworkPlugins/group/auto/Start (40.83s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-106601 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.98s)

TestStartStop/group/newest-cni/serial/Stop (1.34s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-106601 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-106601 --alsologtostderr -v=3: (1.339076298s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.34s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-976238 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-976238 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-106601 -n newest-cni-106601
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-106601 -n newest-cni-106601: exit status 7 (87.409734ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-106601 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/newest-cni/serial/SecondStart (11.18s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-106601 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-106601 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (10.775164499s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-106601 -n newest-cni-106601
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.18s)

TestStartStop/group/embed-certs/serial/Stop (12.07s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-976238 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-976238 --alsologtostderr -v=3: (12.068857181s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.07s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-106601 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/newest-cni/serial/Pause (3.23s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-106601 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-106601 -n newest-cni-106601
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-106601 -n newest-cni-106601: exit status 2 (400.845004ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-106601 -n newest-cni-106601
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-106601 -n newest-cni-106601: exit status 2 (454.380901ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-106601 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-106601 -n newest-cni-106601
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-106601 -n newest-cni-106601
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.23s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-357829 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-357829 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-976238 -n embed-certs-976238
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-976238 -n embed-certs-976238: exit status 7 (119.803064ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-976238 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/embed-certs/serial/SecondStart (44.64s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-976238 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-976238 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (44.257088301s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-976238 -n embed-certs-976238
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (44.64s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-357829 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-357829 --alsologtostderr -v=3: (12.127899171s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.13s)

TestNetworkPlugins/group/kindnet/Start (43.29s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-770004 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-770004 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (43.291414367s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (43.29s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-357829 -n default-k8s-diff-port-357829
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-357829 -n default-k8s-diff-port-357829: exit status 7 (96.600072ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-357829 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-357829 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-357829 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (48.659100984s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-357829 -n default-k8s-diff-port-357829
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.02s)

TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-770004 "pgrep -a kubelet"
I1129 09:05:13.845430  259483 config.go:182] Loaded profile config "auto-770004": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

TestNetworkPlugins/group/auto/NetCatPod (8.4s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-770004 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mdcds" [206313d7-1796-4d35-bd42-2a0abaf905fb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mdcds" [206313d7-1796-4d35-bd42-2a0abaf905fb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004491351s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.40s)

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-770004 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-770004 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-770004 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
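(The three connectivity probes run against the auto profile above, collected for reference; these are the commands from the log and can be replayed by hand against the same netcat deployment.)
    kubectl --context auto-770004 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-770004 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-770004 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"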

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-57hgh" [9cc46149-6857-405c-b9ca-8be96c657bc3] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00419984s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-skkzm" [e5df5394-657a-41e3-80b8-4bdc431f659b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004221202s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/Start (52.61s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-770004 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-770004 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (52.610873425s)
--- PASS: TestNetworkPlugins/group/calico/Start (52.61s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-57hgh" [9cc46149-6857-405c-b9ca-8be96c657bc3] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010474833s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-976238 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-770004 "pgrep -a kubelet"
I1129 09:05:48.576843  259483 config.go:182] Loaded profile config "kindnet-770004": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

TestNetworkPlugins/group/kindnet/NetCatPod (8.53s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-770004 replace --force -f testdata/netcat-deployment.yaml
I1129 09:05:49.096630  259483 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-f67gv" [97a0b1c6-2b78-4b11-b670-d611d522f290] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-f67gv" [97a0b1c6-2b78-4b11-b670-d611d522f290] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.003932702s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.53s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-976238 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/embed-certs/serial/Pause (3.14s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-976238 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-976238 -n embed-certs-976238
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-976238 -n embed-certs-976238: exit status 2 (350.965312ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-976238 -n embed-certs-976238
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-976238 -n embed-certs-976238: exit status 2 (365.55364ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-976238 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-976238 -n embed-certs-976238
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-976238 -n embed-certs-976238
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.14s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ptwp8" [f3eba19d-22bd-483e-813c-db09e051aee5] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003918675s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/custom-flannel/Start (55.75s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-770004 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-770004 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (55.749431674s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (55.75s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-770004 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-770004 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-770004 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ptwp8" [f3eba19d-22bd-483e-813c-db09e051aee5] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004163485s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-357829 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-357829 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)
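
VerifyKubernetesImages lists the images cached in the node and checks them against the expected set for the Kubernetes version; extras such as the busybox and kindnetd images above are only logged, not treated as failures. The underlying command:
    minikube -p default-k8s-diff-port-357829 image list --format=json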

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.75s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-357829 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-357829 -n default-k8s-diff-port-357829
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-357829 -n default-k8s-diff-port-357829: exit status 2 (409.18508ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-357829 -n default-k8s-diff-port-357829
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-357829 -n default-k8s-diff-port-357829: exit status 2 (473.685978ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-357829 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-357829 -n default-k8s-diff-port-357829
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-357829 -n default-k8s-diff-port-357829
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.75s)
E1129 09:07:23.796031  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:07:23.802391  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:07:23.813712  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:07:23.835091  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:07:23.876511  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:07:23.957955  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:07:24.119513  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:07:24.441692  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:07:25.083671  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/no-preload-924441/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
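
In the Pause check above, exit status 2 from minikube status is the expected signal while components are paused, which is why the harness notes "(may be ok)". A hand-run version of the same sequence:
    minikube pause -p default-k8s-diff-port-357829
    # prints Paused / Stopped and exits 2 while the cluster is paused
    minikube status -p default-k8s-diff-port-357829 --format='{{.APIServer}}'
    minikube status -p default-k8s-diff-port-357829 --format='{{.Kubelet}}'
    minikube unpause -p default-k8s-diff-port-357829
    # after unpausing, both queries should report Running and exit 0
    minikube status -p default-k8s-diff-port-357829 --format='{{.APIServer}}'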

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (54.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-770004 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-770004 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (54.235600911s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (72.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-770004 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1129 09:06:33.480499  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/addons-509184/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-770004 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m12.314215441s)
--- PASS: TestNetworkPlugins/group/bridge/Start (72.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-z4kx9" [cce6aed8-9dce-4e19-aa51-a8226aca17e3] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-z4kx9" [cce6aed8-9dce-4e19-aa51-a8226aca17e3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004601762s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
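
ControllerPod waits for the CNI's own control pod to become Ready before the connectivity sub-tests run; for Calico that is the calico-node DaemonSet in kube-system. A rough equivalent of the wait:
    kubectl --context calico-770004 -n kube-system get pods -l k8s-app=calico-node
    kubectl --context calico-770004 -n kube-system \
      wait pod -l k8s-app=calico-node --for=condition=Ready --timeout=10m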

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-770004 "pgrep -a kubelet"
I1129 09:06:41.731972  259483 config.go:182] Loaded profile config "calico-770004": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)
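
KubeletFlags ssh'es into the node and prints the kubelet command line so the expected flags can be asserted; on a containerd cluster the output would typically include a containerd socket as the container runtime endpoint. Run by hand:
    minikube ssh -p calico-770004 "pgrep -a kubelet"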

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-770004 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-59czl" [002206e0-e9a5-44f0-bf92-3ef0fe8c58e1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-59czl" [002206e0-e9a5-44f0-bf92-3ef0fe8c58e1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003480536s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.21s)
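
NetCatPod (re)deploys the small netcat workload that the later DNS, Localhost and HairPin checks exec into, then waits for it by label. By hand:
    kubectl --context calico-770004 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context calico-770004 get pods -l app=netcat --watch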

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-770004 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-770004 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-770004 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-770004 "pgrep -a kubelet"
I1129 09:06:52.401322  259483 config.go:182] Loaded profile config "custom-flannel-770004": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-770004 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zdqp5" [9063be52-3897-4676-900f-5bcc8131d9d1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zdqp5" [9063be52-3897-4676-900f-5bcc8131d9d1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004373179s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-770004 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-770004 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-770004 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-7fl5b" [8321194d-e703-41d3-833d-651d2e69fb75] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003953103s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (63.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-770004 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-770004 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m3.772281167s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (63.77s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-770004 "pgrep -a kubelet"
I1129 09:07:15.014170  259483 config.go:182] Loaded profile config "flannel-770004": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-770004 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-b6zz2" [37fd114c-d756-465e-9d68-25a6cd8688e7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1129 09:07:16.132020  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:07:16.138455  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:07:16.149811  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:07:16.171180  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:07:16.213057  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:07:16.294495  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:07:16.455817  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:07:16.777243  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:07:17.419188  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:07:18.701714  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-b6zz2" [37fd114c-d756-465e-9d68-25a6cd8688e7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003421968s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-770004 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-770004 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-770004 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-770004 "pgrep -a kubelet"
I1129 09:07:35.526240  259483 config.go:182] Loaded profile config "bridge-770004": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-770004 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gncrf" [8307bb8a-f5b1-4f63-b2a4-3033c5a26175] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1129 09:07:36.627297  259483 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/old-k8s-version-295154/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-gncrf" [8307bb8a-f5b1-4f63-b2a4-3033c5a26175] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.005380845s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-770004 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-770004 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-770004 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-770004 "pgrep -a kubelet"
I1129 09:08:15.960499  259483 config.go:182] Loaded profile config "enable-default-cni-770004": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-770004 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nqdjx" [758dbb74-498d-4420-9b20-45d2ef3af292] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nqdjx" [758dbb74-498d-4420-9b20-45d2ef3af292] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.003803869s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-770004 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-770004 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-770004 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                    

Test skip (26/333)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists; binaries are present within it.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists; binaries are present within it.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-286131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-286131
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-770004 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-770004

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-770004

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-770004

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-770004

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-770004

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-770004

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-770004

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-770004

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-770004

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-770004

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-770004

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-770004" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-770004" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 29 Nov 2025 09:00:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-806701
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 29 Nov 2025 09:00:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: running-upgrade-212727
contexts:
- context:
    cluster: kubernetes-upgrade-806701
    user: kubernetes-upgrade-806701
  name: kubernetes-upgrade-806701
- context:
    cluster: running-upgrade-212727
    user: running-upgrade-212727
  name: running-upgrade-212727
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-806701
  user:
    client-certificate: /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/kubernetes-upgrade-806701/client.crt
    client-key: /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/kubernetes-upgrade-806701/client.key
- name: running-upgrade-212727
  user:
    client-certificate: /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/running-upgrade-212727/client.crt
    client-key: /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/running-upgrade-212727/client.key
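
The dump above is consistent with the "context was not found" and "Profile ... not found" messages throughout this debugLogs block: the kubenet-770004 profile was never created because the test skipped before starting it, so the kubeconfig only knows about the two upgrade-test clusters and current-context is empty. The same state can be inspected with:
    kubectl config get-contexts
    kubectl config current-context
    minikube profile list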

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-770004

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-770004"

                                                
                                                
----------------------- debugLogs end: kubenet-770004 [took: 3.92108828s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-770004" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-770004
--- SKIP: TestNetworkPlugins/group/kubenet (4.10s)
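Note on the repeated errors in the debugLogs block above: both signatures are simply what the underlying CLIs print when pointed at a cluster that was never created, since the kubenet case skipped before "minikube start" ran. A minimal, illustrative Go sketch that reproduces the two probes (the profile name comes from this run; the commands are ordinary kubectl/minikube invocations, not the suite's actual debugLogs helper):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "kubenet-770004" // never started in this run

		// kubectl against a missing context prints:
		// "Error in configuration: context was not found for specified context: kubenet-770004"
		out, _ := exec.Command("kubectl", "--context", profile, "get", "pods", "-A").CombinedOutput()
		fmt.Print(string(out))

		// minikube against a missing profile prints:
		// `* Profile "kubenet-770004" not found. Run "minikube profile list" ...`
		out, _ = exec.Command("minikube", "-p", profile, "ssh", "--", "systemctl", "status", "containerd").CombinedOutput()
		fmt.Print(string(out))
	}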

                                                
                                    
TestNetworkPlugins/group/cilium (6.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-770004 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-770004

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-770004

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-770004

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-770004

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-770004

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-770004

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-770004

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-770004

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-770004

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-770004

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-770004

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-770004" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-770004

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-770004

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-770004

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-770004

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-770004" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-770004" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22000-255825/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 29 Nov 2025 09:00:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-806701
contexts:
- context:
    cluster: kubernetes-upgrade-806701
    user: kubernetes-upgrade-806701
  name: kubernetes-upgrade-806701
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-806701
  user:
    client-certificate: /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/kubernetes-upgrade-806701/client.crt
    client-key: /home/jenkins/minikube-integration/22000-255825/.minikube/profiles/kubernetes-upgrade-806701/client.key
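The kubeconfig dumped above has current-context: "" and only a kubernetes-upgrade-806701 entry, which is why every kubectl probe in this block reports that the cilium-770004 context was not found. As an illustration only (an assumed helper, not part of the suite), a small Go check for whether a named context exists:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// contextExists reports whether name appears in `kubectl config get-contexts -o name`.
	func contextExists(name string) (bool, error) {
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			return false, err
		}
		for _, c := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if c == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := contextExists("cilium-770004")
		fmt.Println(ok, err) // false, <nil> against the kubeconfig dumped above
	}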

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-770004

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-770004" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-770004"

                                                
                                                
----------------------- debugLogs end: cilium-770004 [took: 5.968070129s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-770004" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-770004
--- SKIP: TestNetworkPlugins/group/cilium (6.18s)
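For reference, the skip recorded at net_test.go:102 happens before any cluster is created, so the post-mortem collector has no profile or kubeconfig context to inspect. An assumed-shape Go sketch of such an early skip (not minikube's actual test code):

	package netplugins

	import "testing"

	// TestCiliumPlugin skips itself before "minikube start" would run,
	// mirroring the pattern seen in the log above.
	func TestCiliumPlugin(t *testing.T) {
		t.Skip("Skipping the test as it's interfering with other tests and is outdated")
		// cluster start and connectivity checks would follow here
	}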

                                                
                                    