Test Report: Docker_Linux_containerd 21934

0ee4f00f81c855d6dbc5c3cb2cb1b494940d38dc:2025-11-22:42437

Test fail (4/333)

Order  Failed test                                                    Duration (s)
305    TestStartStop/group/old-k8s-version/serial/DeployApp           13.36
306    TestStartStop/group/no-preload/serial/DeployApp                13.15
315    TestStartStop/group/embed-certs/serial/DeployApp               13.66
353    TestStartStop/group/default-k8s-diff-port/serial/DeployApp     13.76
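
All four failures are in the DeployApp step; the detailed log below (old-k8s-version) fails its busybox "ulimit -n" check, returning 1024 where the test expects 1048576. A minimal shell sketch of that check, assuming a running profile named old-k8s-version-462319 as in this run (the actual test drives the wait through its Go helpers rather than kubectl wait):

    # deploy the busybox test pod, wait for it, then read its open-file limit
    kubectl --context old-k8s-version-462319 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-462319 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    kubectl --context old-k8s-version-462319 exec busybox -- /bin/sh -c "ulimit -n"   # test expects 1048576

Whether the other three DeployApp failures hit the same check is not shown in this excerpt.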
TestStartStop/group/old-k8s-version/serial/DeployApp (13.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-462319 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [89dd9411-148d-4a8e-98d3-a51a8eab9d35] Pending
helpers_test.go:352: "busybox" [89dd9411-148d-4a8e-98d3-a51a8eab9d35] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [89dd9411-148d-4a8e-98d3-a51a8eab9d35] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004358505s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-462319 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-462319
helpers_test.go:243: (dbg) docker inspect old-k8s-version-462319:

-- stdout --
	[
	    {
	        "Id": "60eae3b63b81b346ead7547921d488153ed6b21604550a910dce24f5c18a0d66",
	        "Created": "2025-11-22T00:19:16.365495044Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 248707,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:19:16.402958348Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e5906b22e872a17998ae88aee6d850484e7a99144e0db6afcf2c44a53e6042d4",
	        "ResolvConfPath": "/var/lib/docker/containers/60eae3b63b81b346ead7547921d488153ed6b21604550a910dce24f5c18a0d66/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/60eae3b63b81b346ead7547921d488153ed6b21604550a910dce24f5c18a0d66/hostname",
	        "HostsPath": "/var/lib/docker/containers/60eae3b63b81b346ead7547921d488153ed6b21604550a910dce24f5c18a0d66/hosts",
	        "LogPath": "/var/lib/docker/containers/60eae3b63b81b346ead7547921d488153ed6b21604550a910dce24f5c18a0d66/60eae3b63b81b346ead7547921d488153ed6b21604550a910dce24f5c18a0d66-json.log",
	        "Name": "/old-k8s-version-462319",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-462319:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-462319",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "60eae3b63b81b346ead7547921d488153ed6b21604550a910dce24f5c18a0d66",
	                "LowerDir": "/var/lib/docker/overlay2/6ca06b58ff047715f101193d0f051e92ffb3bb47f4e9e98de16e3d4c7f58beb1-init/diff:/var/lib/docker/overlay2/4b4af9a4e857911a6b5096aeeaee227ee7577c6eff3b08bbb4e765c49ed2fb70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6ca06b58ff047715f101193d0f051e92ffb3bb47f4e9e98de16e3d4c7f58beb1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6ca06b58ff047715f101193d0f051e92ffb3bb47f4e9e98de16e3d4c7f58beb1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6ca06b58ff047715f101193d0f051e92ffb3bb47f4e9e98de16e3d4c7f58beb1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-462319",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-462319/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-462319",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-462319",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-462319",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b6589169c31c78bfea6577019ea30ba0adadee1467810b9b1a0b1b8b4a97b9f5",
	            "SandboxKey": "/var/run/docker/netns/b6589169c31c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-462319": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "08252eaaf7e532efc839aa6b0c4ce7bea14dc3e5057df8085e81eab6e1e46265",
	                    "EndpointID": "d132fdb6f6e769e175e9e69bd315da82881eb4351a6b66ae2fe24784dbabd3ac",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "6e:0f:4c:be:16:ac",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-462319",
	                        "60eae3b63b81"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-462319 -n old-k8s-version-462319
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-462319 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-462319 logs -n 25: (1.155373591s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cilium-687868 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                           │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo containerd config dump                                                                                                                                                                                                        │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo crio config                                                                                                                                                                                                                   │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ delete  │ -p cilium-687868                                                                                                                                                                                                                                    │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ start   │ -p old-k8s-version-462319 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-462319 │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:20 UTC │
	│ ssh     │ -p NoKubernetes-714059 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ start   │ -p cert-expiration-427330 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-427330 │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ delete  │ -p cert-expiration-427330                                                                                                                                                                                                                           │ cert-expiration-427330 │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ start   │ -p no-preload-781232 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-781232      │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ stop    │ -p NoKubernetes-714059                                                                                                                                                                                                                              │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ start   │ -p NoKubernetes-714059 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ ssh     │ -p NoKubernetes-714059 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ delete  │ -p NoKubernetes-714059                                                                                                                                                                                                                              │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:20 UTC │
	│ start   │ -p embed-certs-491677 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-491677     │ jenkins │ v1.37.0 │ 22 Nov 25 00:20 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:20:01
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:20:01.497017  260527 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:20:01.497324  260527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:20:01.497336  260527 out.go:374] Setting ErrFile to fd 2...
	I1122 00:20:01.497340  260527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:20:01.497588  260527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9059/.minikube/bin
	I1122 00:20:01.498054  260527 out.go:368] Setting JSON to false
	I1122 00:20:01.499443  260527 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3740,"bootTime":1763767061,"procs":385,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:20:01.499503  260527 start.go:143] virtualization: kvm guest
	I1122 00:20:01.501458  260527 out.go:179] * [embed-certs-491677] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:20:01.503562  260527 notify.go:221] Checking for updates...
	I1122 00:20:01.503572  260527 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:20:01.505088  260527 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:20:01.506758  260527 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9059/kubeconfig
	I1122 00:20:01.508287  260527 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9059/.minikube
	I1122 00:20:01.509699  260527 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:20:01.511183  260527 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:20:01.513382  260527 config.go:182] Loaded profile config "kubernetes-upgrade-882262": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:20:01.513541  260527 config.go:182] Loaded profile config "no-preload-781232": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:20:01.513638  260527 config.go:182] Loaded profile config "old-k8s-version-462319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1122 00:20:01.513752  260527 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:20:01.545401  260527 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1122 00:20:01.545504  260527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:20:01.611105  260527 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-22 00:20:01.601298329 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:20:01.611234  260527 docker.go:319] overlay module found
	I1122 00:20:01.613226  260527 out.go:179] * Using the docker driver based on user configuration
	I1122 00:20:01.614649  260527 start.go:309] selected driver: docker
	I1122 00:20:01.614666  260527 start.go:930] validating driver "docker" against <nil>
	I1122 00:20:01.614677  260527 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:20:01.615350  260527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:20:01.674666  260527 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-22 00:20:01.664354692 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:20:01.674876  260527 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1122 00:20:01.675176  260527 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:20:01.676975  260527 out.go:179] * Using Docker driver with root privileges
	I1122 00:20:01.678251  260527 cni.go:84] Creating CNI manager for ""
	I1122 00:20:01.678367  260527 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:20:01.678383  260527 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1122 00:20:01.678481  260527 start.go:353] cluster config:
	{Name:embed-certs-491677 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-491677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:20:01.680036  260527 out.go:179] * Starting "embed-certs-491677" primary control-plane node in "embed-certs-491677" cluster
	I1122 00:20:01.683810  260527 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1122 00:20:01.685242  260527 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:20:01.686680  260527 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:20:01.686729  260527 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1122 00:20:01.686743  260527 cache.go:65] Caching tarball of preloaded images
	I1122 00:20:01.686775  260527 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:20:01.686916  260527 preload.go:238] Found /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1122 00:20:01.686942  260527 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1122 00:20:01.687116  260527 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/config.json ...
	I1122 00:20:01.687148  260527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/config.json: {Name:mkf02d672882aad1c3b94e79745f8cf62e3f5b13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:20:01.708872  260527 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:20:01.708897  260527 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:20:01.708914  260527 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:20:01.708943  260527 start.go:360] acquireMachinesLock for embed-certs-491677: {Name:mkbe59d49caffedca862a9ecb177d8d82196efdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:20:01.709044  260527 start.go:364] duration metric: took 84.98µs to acquireMachinesLock for "embed-certs-491677"
	I1122 00:20:01.709067  260527 start.go:93] Provisioning new machine with config: &{Name:embed-certs-491677 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-491677 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1122 00:20:01.709131  260527 start.go:125] createHost starting for "" (driver="docker")
	I1122 00:19:58.829298  218693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:19:58.829759  218693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:19:58.829815  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:19:58.829864  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:19:58.856999  218693 cri.go:89] found id: "031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:19:58.857027  218693 cri.go:89] found id: "2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:19:58.857033  218693 cri.go:89] found id: ""
	I1122 00:19:58.857044  218693 logs.go:282] 2 containers: [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587]
	I1122 00:19:58.857093  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:19:58.861107  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:19:58.865268  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1122 00:19:58.865337  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:19:58.892542  218693 cri.go:89] found id: "ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:19:58.892564  218693 cri.go:89] found id: ""
	I1122 00:19:58.892572  218693 logs.go:282] 1 containers: [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7]
	I1122 00:19:58.892626  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:19:58.896771  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1122 00:19:58.896846  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:19:58.925628  218693 cri.go:89] found id: ""
	I1122 00:19:58.925652  218693 logs.go:282] 0 containers: []
	W1122 00:19:58.925660  218693 logs.go:284] No container was found matching "coredns"
	I1122 00:19:58.925666  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:19:58.925724  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:19:58.955304  218693 cri.go:89] found id: "8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:19:58.955326  218693 cri.go:89] found id: "b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:19:58.955332  218693 cri.go:89] found id: ""
	I1122 00:19:58.955340  218693 logs.go:282] 2 containers: [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2]
	I1122 00:19:58.955397  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:19:58.959396  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:19:58.963562  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:19:58.963626  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:19:58.991860  218693 cri.go:89] found id: ""
	I1122 00:19:58.991883  218693 logs.go:282] 0 containers: []
	W1122 00:19:58.991890  218693 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:19:58.991895  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:19:58.991949  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:19:59.020457  218693 cri.go:89] found id: "91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:19:59.020483  218693 cri.go:89] found id: "13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:19:59.020489  218693 cri.go:89] found id: ""
	I1122 00:19:59.020502  218693 logs.go:282] 2 containers: [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216]
	I1122 00:19:59.020550  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:19:59.024967  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:19:59.031778  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1122 00:19:59.031854  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:19:59.061726  218693 cri.go:89] found id: ""
	I1122 00:19:59.061752  218693 logs.go:282] 0 containers: []
	W1122 00:19:59.061763  218693 logs.go:284] No container was found matching "kindnet"
	I1122 00:19:59.061771  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:19:59.061831  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:19:59.089141  218693 cri.go:89] found id: ""
	I1122 00:19:59.089164  218693 logs.go:282] 0 containers: []
	W1122 00:19:59.089174  218693 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:19:59.089185  218693 logs.go:123] Gathering logs for kubelet ...
	I1122 00:19:59.089198  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:19:59.186417  218693 logs.go:123] Gathering logs for dmesg ...
	I1122 00:19:59.186452  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:19:59.201060  218693 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:19:59.201095  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:19:59.264254  218693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:19:59.264297  218693 logs.go:123] Gathering logs for kube-apiserver [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d] ...
	I1122 00:19:59.264313  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:19:59.303605  218693 logs.go:123] Gathering logs for kube-scheduler [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78] ...
	I1122 00:19:59.303643  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:19:59.358382  218693 logs.go:123] Gathering logs for kube-controller-manager [13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216] ...
	I1122 00:19:59.358425  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:19:59.398629  218693 logs.go:123] Gathering logs for containerd ...
	I1122 00:19:59.398669  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1122 00:19:59.449463  218693 logs.go:123] Gathering logs for kube-apiserver [2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587] ...
	I1122 00:19:59.449505  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:19:59.487365  218693 logs.go:123] Gathering logs for etcd [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7] ...
	I1122 00:19:59.487403  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:19:59.526046  218693 logs.go:123] Gathering logs for kube-scheduler [b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2] ...
	I1122 00:19:59.526080  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:19:59.562812  218693 logs.go:123] Gathering logs for kube-controller-manager [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a] ...
	I1122 00:19:59.562843  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:19:59.594191  218693 logs.go:123] Gathering logs for container status ...
	I1122 00:19:59.594230  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:20:02.129372  218693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:20:02.129923  218693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:20:02.130004  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:20:02.130071  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:20:02.161455  218693 cri.go:89] found id: "031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:02.161484  218693 cri.go:89] found id: "2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:02.161490  218693 cri.go:89] found id: ""
	I1122 00:20:02.161501  218693 logs.go:282] 2 containers: [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587]
	I1122 00:20:02.161563  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:02.165824  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:02.170451  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1122 00:20:02.170522  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:19:58.029853  251199 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-781232" context rescaled to 1 replicas
	W1122 00:19:59.529847  251199 node_ready.go:57] node "no-preload-781232" has "Ready":"False" status (will retry)
	W1122 00:20:01.530493  251199 node_ready.go:57] node "no-preload-781232" has "Ready":"False" status (will retry)
	W1122 00:20:00.520224  247021 node_ready.go:57] node "old-k8s-version-462319" has "Ready":"False" status (will retry)
	I1122 00:20:01.019651  247021 node_ready.go:49] node "old-k8s-version-462319" is "Ready"
	I1122 00:20:01.019681  247021 node_ready.go:38] duration metric: took 14.003330086s for node "old-k8s-version-462319" to be "Ready" ...
	I1122 00:20:01.019696  247021 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:20:01.019743  247021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:20:01.032926  247021 api_server.go:72] duration metric: took 14.481952557s to wait for apiserver process to appear ...
	I1122 00:20:01.032954  247021 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:20:01.032973  247021 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:20:01.039899  247021 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1122 00:20:01.041146  247021 api_server.go:141] control plane version: v1.28.0
	I1122 00:20:01.041172  247021 api_server.go:131] duration metric: took 8.212119ms to wait for apiserver health ...
	I1122 00:20:01.041191  247021 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:20:01.044815  247021 system_pods.go:59] 8 kube-system pods found
	I1122 00:20:01.044853  247021 system_pods.go:61] "coredns-5dd5756b68-pqbfp" [44750e8d-5eeb-4845-9029-a58cbf976b62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:01.044862  247021 system_pods.go:61] "etcd-old-k8s-version-462319" [9580468b-aa0f-4d73-9c35-f9cc4c817cdd] Running
	I1122 00:20:01.044874  247021 system_pods.go:61] "kindnet-ldtd8" [6bf161d2-c442-466d-98b8-c313a127bf22] Running
	I1122 00:20:01.044879  247021 system_pods.go:61] "kube-apiserver-old-k8s-version-462319" [2f4b6fd0-2929-448d-820c-aabf2a9d4744] Running
	I1122 00:20:01.044888  247021 system_pods.go:61] "kube-controller-manager-old-k8s-version-462319" [83b4a291-8bac-4581-b4a6-80471e7228eb] Running
	I1122 00:20:01.044897  247021 system_pods.go:61] "kube-proxy-kqrng" [643cd348-4af3-4720-af0d-e931f184742c] Running
	I1122 00:20:01.044901  247021 system_pods.go:61] "kube-scheduler-old-k8s-version-462319" [c1dc982d-cc79-4df6-bdc4-7e47f5d5236c] Running
	I1122 00:20:01.044909  247021 system_pods.go:61] "storage-provisioner" [fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:01.044918  247021 system_pods.go:74] duration metric: took 3.718269ms to wait for pod list to return data ...
	I1122 00:20:01.044929  247021 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:20:01.047150  247021 default_sa.go:45] found service account: "default"
	I1122 00:20:01.047173  247021 default_sa.go:55] duration metric: took 2.236156ms for default service account to be created ...
	I1122 00:20:01.047182  247021 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:20:01.050474  247021 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:01.050506  247021 system_pods.go:89] "coredns-5dd5756b68-pqbfp" [44750e8d-5eeb-4845-9029-a58cbf976b62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:01.050514  247021 system_pods.go:89] "etcd-old-k8s-version-462319" [9580468b-aa0f-4d73-9c35-f9cc4c817cdd] Running
	I1122 00:20:01.050523  247021 system_pods.go:89] "kindnet-ldtd8" [6bf161d2-c442-466d-98b8-c313a127bf22] Running
	I1122 00:20:01.050528  247021 system_pods.go:89] "kube-apiserver-old-k8s-version-462319" [2f4b6fd0-2929-448d-820c-aabf2a9d4744] Running
	I1122 00:20:01.050533  247021 system_pods.go:89] "kube-controller-manager-old-k8s-version-462319" [83b4a291-8bac-4581-b4a6-80471e7228eb] Running
	I1122 00:20:01.050539  247021 system_pods.go:89] "kube-proxy-kqrng" [643cd348-4af3-4720-af0d-e931f184742c] Running
	I1122 00:20:01.050544  247021 system_pods.go:89] "kube-scheduler-old-k8s-version-462319" [c1dc982d-cc79-4df6-bdc4-7e47f5d5236c] Running
	I1122 00:20:01.050551  247021 system_pods.go:89] "storage-provisioner" [fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:01.050577  247021 retry.go:31] will retry after 205.575764ms: missing components: kube-dns
	I1122 00:20:01.261814  247021 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:01.261847  247021 system_pods.go:89] "coredns-5dd5756b68-pqbfp" [44750e8d-5eeb-4845-9029-a58cbf976b62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:01.261859  247021 system_pods.go:89] "etcd-old-k8s-version-462319" [9580468b-aa0f-4d73-9c35-f9cc4c817cdd] Running
	I1122 00:20:01.261865  247021 system_pods.go:89] "kindnet-ldtd8" [6bf161d2-c442-466d-98b8-c313a127bf22] Running
	I1122 00:20:01.261869  247021 system_pods.go:89] "kube-apiserver-old-k8s-version-462319" [2f4b6fd0-2929-448d-820c-aabf2a9d4744] Running
	I1122 00:20:01.261873  247021 system_pods.go:89] "kube-controller-manager-old-k8s-version-462319" [83b4a291-8bac-4581-b4a6-80471e7228eb] Running
	I1122 00:20:01.261877  247021 system_pods.go:89] "kube-proxy-kqrng" [643cd348-4af3-4720-af0d-e931f184742c] Running
	I1122 00:20:01.261879  247021 system_pods.go:89] "kube-scheduler-old-k8s-version-462319" [c1dc982d-cc79-4df6-bdc4-7e47f5d5236c] Running
	I1122 00:20:01.261884  247021 system_pods.go:89] "storage-provisioner" [fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:01.261900  247021 retry.go:31] will retry after 236.21482ms: missing components: kube-dns
	I1122 00:20:01.502877  247021 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:01.502913  247021 system_pods.go:89] "coredns-5dd5756b68-pqbfp" [44750e8d-5eeb-4845-9029-a58cbf976b62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:01.502921  247021 system_pods.go:89] "etcd-old-k8s-version-462319" [9580468b-aa0f-4d73-9c35-f9cc4c817cdd] Running
	I1122 00:20:01.502929  247021 system_pods.go:89] "kindnet-ldtd8" [6bf161d2-c442-466d-98b8-c313a127bf22] Running
	I1122 00:20:01.502935  247021 system_pods.go:89] "kube-apiserver-old-k8s-version-462319" [2f4b6fd0-2929-448d-820c-aabf2a9d4744] Running
	I1122 00:20:01.502952  247021 system_pods.go:89] "kube-controller-manager-old-k8s-version-462319" [83b4a291-8bac-4581-b4a6-80471e7228eb] Running
	I1122 00:20:01.502957  247021 system_pods.go:89] "kube-proxy-kqrng" [643cd348-4af3-4720-af0d-e931f184742c] Running
	I1122 00:20:01.502962  247021 system_pods.go:89] "kube-scheduler-old-k8s-version-462319" [c1dc982d-cc79-4df6-bdc4-7e47f5d5236c] Running
	I1122 00:20:01.502984  247021 system_pods.go:89] "storage-provisioner" [fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:01.503005  247021 retry.go:31] will retry after 442.873739ms: missing components: kube-dns
	I1122 00:20:01.950449  247021 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:01.950483  247021 system_pods.go:89] "coredns-5dd5756b68-pqbfp" [44750e8d-5eeb-4845-9029-a58cbf976b62] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:01.950492  247021 system_pods.go:89] "etcd-old-k8s-version-462319" [9580468b-aa0f-4d73-9c35-f9cc4c817cdd] Running
	I1122 00:20:01.950500  247021 system_pods.go:89] "kindnet-ldtd8" [6bf161d2-c442-466d-98b8-c313a127bf22] Running
	I1122 00:20:01.950505  247021 system_pods.go:89] "kube-apiserver-old-k8s-version-462319" [2f4b6fd0-2929-448d-820c-aabf2a9d4744] Running
	I1122 00:20:01.950516  247021 system_pods.go:89] "kube-controller-manager-old-k8s-version-462319" [83b4a291-8bac-4581-b4a6-80471e7228eb] Running
	I1122 00:20:01.950521  247021 system_pods.go:89] "kube-proxy-kqrng" [643cd348-4af3-4720-af0d-e931f184742c] Running
	I1122 00:20:01.950526  247021 system_pods.go:89] "kube-scheduler-old-k8s-version-462319" [c1dc982d-cc79-4df6-bdc4-7e47f5d5236c] Running
	I1122 00:20:01.950530  247021 system_pods.go:89] "storage-provisioner" [fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2] Running
	I1122 00:20:01.950541  247021 system_pods.go:126] duration metric: took 903.352039ms to wait for k8s-apps to be running ...
	I1122 00:20:01.950553  247021 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:20:01.950602  247021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:20:01.964580  247021 system_svc.go:56] duration metric: took 14.015441ms WaitForService to wait for kubelet
	I1122 00:20:01.964612  247021 kubeadm.go:587] duration metric: took 15.413644993s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:20:01.964634  247021 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:20:01.968157  247021 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1122 00:20:01.968185  247021 node_conditions.go:123] node cpu capacity is 8
	I1122 00:20:01.968205  247021 node_conditions.go:105] duration metric: took 3.565831ms to run NodePressure ...
	I1122 00:20:01.968227  247021 start.go:242] waiting for startup goroutines ...
	I1122 00:20:01.968237  247021 start.go:247] waiting for cluster config update ...
	I1122 00:20:01.968254  247021 start.go:256] writing updated cluster config ...
	I1122 00:20:01.968545  247021 ssh_runner.go:195] Run: rm -f paused
	I1122 00:20:01.972712  247021 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:20:01.976920  247021 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-pqbfp" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:02.983354  247021 pod_ready.go:94] pod "coredns-5dd5756b68-pqbfp" is "Ready"
	I1122 00:20:02.983385  247021 pod_ready.go:86] duration metric: took 1.00643947s for pod "coredns-5dd5756b68-pqbfp" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:02.987209  247021 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:02.992024  247021 pod_ready.go:94] pod "etcd-old-k8s-version-462319" is "Ready"
	I1122 00:20:02.992053  247021 pod_ready.go:86] duration metric: took 4.821819ms for pod "etcd-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:02.994875  247021 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:02.998765  247021 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-462319" is "Ready"
	I1122 00:20:02.998789  247021 pod_ready.go:86] duration metric: took 3.892836ms for pod "kube-apiserver-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:03.001798  247021 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:03.181579  247021 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-462319" is "Ready"
	I1122 00:20:03.181611  247021 pod_ready.go:86] duration metric: took 179.791243ms for pod "kube-controller-manager-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:03.381883  247021 pod_ready.go:83] waiting for pod "kube-proxy-kqrng" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:03.781562  247021 pod_ready.go:94] pod "kube-proxy-kqrng" is "Ready"
	I1122 00:20:03.781594  247021 pod_ready.go:86] duration metric: took 399.684082ms for pod "kube-proxy-kqrng" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:03.981736  247021 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:04.381559  247021 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-462319" is "Ready"
	I1122 00:20:04.381590  247021 pod_ready.go:86] duration metric: took 399.825883ms for pod "kube-scheduler-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:04.381604  247021 pod_ready.go:40] duration metric: took 2.408861294s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:20:04.431804  247021 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1122 00:20:04.435233  247021 out.go:203] 
	W1122 00:20:04.436473  247021 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1122 00:20:04.437863  247021 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1122 00:20:04.439555  247021 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-462319" cluster and "default" namespace by default
	I1122 00:20:01.711315  260527 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1122 00:20:01.711555  260527 start.go:159] libmachine.API.Create for "embed-certs-491677" (driver="docker")
	I1122 00:20:01.711610  260527 client.go:173] LocalClient.Create starting
	I1122 00:20:01.711685  260527 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem
	I1122 00:20:01.711719  260527 main.go:143] libmachine: Decoding PEM data...
	I1122 00:20:01.711737  260527 main.go:143] libmachine: Parsing certificate...
	I1122 00:20:01.711816  260527 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9059/.minikube/certs/cert.pem
	I1122 00:20:01.711837  260527 main.go:143] libmachine: Decoding PEM data...
	I1122 00:20:01.711846  260527 main.go:143] libmachine: Parsing certificate...
	I1122 00:20:01.712184  260527 cli_runner.go:164] Run: docker network inspect embed-certs-491677 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 00:20:01.730686  260527 cli_runner.go:211] docker network inspect embed-certs-491677 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 00:20:01.730752  260527 network_create.go:284] running [docker network inspect embed-certs-491677] to gather additional debugging logs...
	I1122 00:20:01.730771  260527 cli_runner.go:164] Run: docker network inspect embed-certs-491677
	W1122 00:20:01.749708  260527 cli_runner.go:211] docker network inspect embed-certs-491677 returned with exit code 1
	I1122 00:20:01.749739  260527 network_create.go:287] error running [docker network inspect embed-certs-491677]: docker network inspect embed-certs-491677: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-491677 not found
	I1122 00:20:01.749755  260527 network_create.go:289] output of [docker network inspect embed-certs-491677]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-491677 not found
	
	** /stderr **
	I1122 00:20:01.749902  260527 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:20:01.769006  260527 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1df6c22ede91 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:c7:f4:a5:24:54} reservation:<nil>}
	I1122 00:20:01.769731  260527 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-7d48551462a8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:3b:0e:74:ee:57} reservation:<nil>}
	I1122 00:20:01.770416  260527 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c50004b7f5b6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:73:1e:0d:b7:11} reservation:<nil>}
	I1122 00:20:01.771113  260527 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-166d2f324fb5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:da:99:1e:87:6f} reservation:<nil>}
	I1122 00:20:01.771891  260527 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ebca10}
	I1122 00:20:01.771919  260527 network_create.go:124] attempt to create docker network embed-certs-491677 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1122 00:20:01.771970  260527 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-491677 embed-certs-491677
	I1122 00:20:01.823460  260527 network_create.go:108] docker network embed-certs-491677 192.168.85.0/24 created
	I1122 00:20:01.823495  260527 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-491677" container
	I1122 00:20:01.823677  260527 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 00:20:01.843300  260527 cli_runner.go:164] Run: docker volume create embed-certs-491677 --label name.minikube.sigs.k8s.io=embed-certs-491677 --label created_by.minikube.sigs.k8s.io=true
	I1122 00:20:01.863723  260527 oci.go:103] Successfully created a docker volume embed-certs-491677
	I1122 00:20:01.863797  260527 cli_runner.go:164] Run: docker run --rm --name embed-certs-491677-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-491677 --entrypoint /usr/bin/test -v embed-certs-491677:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1122 00:20:02.270865  260527 oci.go:107] Successfully prepared a docker volume embed-certs-491677
	I1122 00:20:02.270965  260527 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:20:02.270986  260527 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:20:02.271058  260527 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-491677:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
	I1122 00:20:02.204729  218693 cri.go:89] found id: "ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:02.204756  218693 cri.go:89] found id: ""
	I1122 00:20:02.204766  218693 logs.go:282] 1 containers: [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7]
	I1122 00:20:02.204829  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:02.209535  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1122 00:20:02.209603  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:20:02.247383  218693 cri.go:89] found id: ""
	I1122 00:20:02.247408  218693 logs.go:282] 0 containers: []
	W1122 00:20:02.247416  218693 logs.go:284] No container was found matching "coredns"
	I1122 00:20:02.247422  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:20:02.247484  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:20:02.277440  218693 cri.go:89] found id: "8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:02.277466  218693 cri.go:89] found id: "b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:02.277473  218693 cri.go:89] found id: ""
	I1122 00:20:02.277483  218693 logs.go:282] 2 containers: [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2]
	I1122 00:20:02.277545  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:02.282049  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:02.286514  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:20:02.286581  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:20:02.316706  218693 cri.go:89] found id: ""
	I1122 00:20:02.316733  218693 logs.go:282] 0 containers: []
	W1122 00:20:02.316744  218693 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:20:02.316753  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:20:02.316813  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:20:02.347451  218693 cri.go:89] found id: "91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:02.347471  218693 cri.go:89] found id: "13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:02.347476  218693 cri.go:89] found id: ""
	I1122 00:20:02.347486  218693 logs.go:282] 2 containers: [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216]
	I1122 00:20:02.347542  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:02.352378  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:02.356502  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1122 00:20:02.356561  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:20:02.384778  218693 cri.go:89] found id: ""
	I1122 00:20:02.384802  218693 logs.go:282] 0 containers: []
	W1122 00:20:02.384814  218693 logs.go:284] No container was found matching "kindnet"
	I1122 00:20:02.384825  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:20:02.384887  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:20:02.421102  218693 cri.go:89] found id: ""
	I1122 00:20:02.421131  218693 logs.go:282] 0 containers: []
	W1122 00:20:02.421143  218693 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:20:02.421156  218693 logs.go:123] Gathering logs for containerd ...
	I1122 00:20:02.421171  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1122 00:20:02.477880  218693 logs.go:123] Gathering logs for kubelet ...
	I1122 00:20:02.477924  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:20:02.574856  218693 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:20:02.574892  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:20:02.641120  218693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:20:02.641142  218693 logs.go:123] Gathering logs for kube-apiserver [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d] ...
	I1122 00:20:02.641154  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:02.681648  218693 logs.go:123] Gathering logs for kube-scheduler [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78] ...
	I1122 00:20:02.681686  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:02.739093  218693 logs.go:123] Gathering logs for kube-scheduler [b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2] ...
	I1122 00:20:02.739128  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:02.774358  218693 logs.go:123] Gathering logs for kube-controller-manager [13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216] ...
	I1122 00:20:02.774395  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:02.810272  218693 logs.go:123] Gathering logs for container status ...
	I1122 00:20:02.810310  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:20:02.842900  218693 logs.go:123] Gathering logs for dmesg ...
	I1122 00:20:02.842942  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:20:02.857743  218693 logs.go:123] Gathering logs for kube-apiserver [2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587] ...
	I1122 00:20:02.857784  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:02.894229  218693 logs.go:123] Gathering logs for etcd [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7] ...
	I1122 00:20:02.894272  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:02.929523  218693 logs.go:123] Gathering logs for kube-controller-manager [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a] ...
	I1122 00:20:02.929555  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:05.459958  218693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:20:05.460532  218693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:20:05.460597  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:20:05.460676  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:20:05.488636  218693 cri.go:89] found id: "031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:05.488658  218693 cri.go:89] found id: "2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:05.488662  218693 cri.go:89] found id: ""
	I1122 00:20:05.488670  218693 logs.go:282] 2 containers: [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587]
	I1122 00:20:05.488715  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:05.492971  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:05.496804  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1122 00:20:05.496876  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:20:05.524856  218693 cri.go:89] found id: "ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:05.524883  218693 cri.go:89] found id: ""
	I1122 00:20:05.524902  218693 logs.go:282] 1 containers: [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7]
	I1122 00:20:05.524962  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:05.529434  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1122 00:20:05.529521  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:20:05.557780  218693 cri.go:89] found id: ""
	I1122 00:20:05.557805  218693 logs.go:282] 0 containers: []
	W1122 00:20:05.557819  218693 logs.go:284] No container was found matching "coredns"
	I1122 00:20:05.557828  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:20:05.557885  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:20:05.586142  218693 cri.go:89] found id: "8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:05.586166  218693 cri.go:89] found id: "b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:05.586173  218693 cri.go:89] found id: ""
	I1122 00:20:05.586184  218693 logs.go:282] 2 containers: [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2]
	I1122 00:20:05.586248  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:05.590458  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:05.594671  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:20:05.594752  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:20:05.623542  218693 cri.go:89] found id: ""
	I1122 00:20:05.623565  218693 logs.go:282] 0 containers: []
	W1122 00:20:05.623575  218693 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:20:05.623585  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:20:05.623653  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:20:05.651642  218693 cri.go:89] found id: "91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:05.651663  218693 cri.go:89] found id: "13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:05.651666  218693 cri.go:89] found id: ""
	I1122 00:20:05.651674  218693 logs.go:282] 2 containers: [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216]
	I1122 00:20:05.651724  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:05.655785  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:05.659668  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1122 00:20:05.659743  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:20:05.687725  218693 cri.go:89] found id: ""
	I1122 00:20:05.687748  218693 logs.go:282] 0 containers: []
	W1122 00:20:05.687756  218693 logs.go:284] No container was found matching "kindnet"
	I1122 00:20:05.687762  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:20:05.687810  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:20:05.714403  218693 cri.go:89] found id: ""
	I1122 00:20:05.714432  218693 logs.go:282] 0 containers: []
	W1122 00:20:05.714444  218693 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:20:05.714457  218693 logs.go:123] Gathering logs for kube-apiserver [2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587] ...
	I1122 00:20:05.714472  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:05.748851  218693 logs.go:123] Gathering logs for kube-scheduler [b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2] ...
	I1122 00:20:05.748901  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:05.784862  218693 logs.go:123] Gathering logs for kube-controller-manager [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a] ...
	I1122 00:20:05.784899  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:05.813532  218693 logs.go:123] Gathering logs for kube-controller-manager [13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216] ...
	I1122 00:20:05.813569  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:05.844930  218693 logs.go:123] Gathering logs for containerd ...
	I1122 00:20:05.844965  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1122 00:20:05.897273  218693 logs.go:123] Gathering logs for container status ...
	I1122 00:20:05.897337  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:20:05.935381  218693 logs.go:123] Gathering logs for kubelet ...
	I1122 00:20:05.935417  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:20:06.025566  218693 logs.go:123] Gathering logs for dmesg ...
	I1122 00:20:06.025612  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:20:06.040810  218693 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:20:06.040843  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:20:06.102006  218693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:20:06.102032  218693 logs.go:123] Gathering logs for etcd [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7] ...
	I1122 00:20:06.102050  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:06.136887  218693 logs.go:123] Gathering logs for kube-scheduler [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78] ...
	I1122 00:20:06.136937  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:06.192634  218693 logs.go:123] Gathering logs for kube-apiserver [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d] ...
	I1122 00:20:06.192674  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	W1122 00:20:04.029159  251199 node_ready.go:57] node "no-preload-781232" has "Ready":"False" status (will retry)
	W1122 00:20:06.067087  251199 node_ready.go:57] node "no-preload-781232" has "Ready":"False" status (will retry)
	I1122 00:20:06.722373  260527 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-491677:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.451238931s)
	I1122 00:20:06.722412  260527 kic.go:203] duration metric: took 4.451422839s to extract preloaded images to volume ...
	W1122 00:20:06.722533  260527 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1122 00:20:06.722570  260527 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1122 00:20:06.722615  260527 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:20:06.782296  260527 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-491677 --name embed-certs-491677 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-491677 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-491677 --network embed-certs-491677 --ip 192.168.85.2 --volume embed-certs-491677:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1122 00:20:07.109552  260527 cli_runner.go:164] Run: docker container inspect embed-certs-491677 --format={{.State.Running}}
	I1122 00:20:07.129178  260527 cli_runner.go:164] Run: docker container inspect embed-certs-491677 --format={{.State.Status}}
	I1122 00:20:07.148399  260527 cli_runner.go:164] Run: docker exec embed-certs-491677 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:20:07.196229  260527 oci.go:144] the created container "embed-certs-491677" has a running status.
	I1122 00:20:07.196362  260527 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa...
	I1122 00:20:07.257446  260527 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:20:07.289218  260527 cli_runner.go:164] Run: docker container inspect embed-certs-491677 --format={{.State.Status}}
	I1122 00:20:07.310559  260527 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:20:07.310578  260527 kic_runner.go:114] Args: [docker exec --privileged embed-certs-491677 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:20:07.351585  260527 cli_runner.go:164] Run: docker container inspect embed-certs-491677 --format={{.State.Status}}
	I1122 00:20:07.374469  260527 machine.go:94] provisionDockerMachine start ...
	I1122 00:20:07.374754  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:07.397641  260527 main.go:143] libmachine: Using SSH client type: native
	I1122 00:20:07.397885  260527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1122 00:20:07.397902  260527 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:20:07.398578  260527 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36770->127.0.0.1:33073: read: connection reset by peer
	I1122 00:20:10.523553  260527 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-491677
	
	I1122 00:20:10.523587  260527 ubuntu.go:182] provisioning hostname "embed-certs-491677"
	I1122 00:20:10.523652  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:10.544251  260527 main.go:143] libmachine: Using SSH client type: native
	I1122 00:20:10.544519  260527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1122 00:20:10.544536  260527 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-491677 && echo "embed-certs-491677" | sudo tee /etc/hostname
	I1122 00:20:10.679747  260527 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-491677
	
	I1122 00:20:10.679832  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:10.700586  260527 main.go:143] libmachine: Using SSH client type: native
	I1122 00:20:10.700833  260527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1122 00:20:10.700858  260527 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-491677' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-491677/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-491677' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:20:10.825289  260527 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:20:10.825326  260527 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9059/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9059/.minikube}
	I1122 00:20:10.825375  260527 ubuntu.go:190] setting up certificates
	I1122 00:20:10.825411  260527 provision.go:84] configureAuth start
	I1122 00:20:10.825489  260527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-491677
	I1122 00:20:10.844220  260527 provision.go:143] copyHostCerts
	I1122 00:20:10.844298  260527 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9059/.minikube/ca.pem, removing ...
	I1122 00:20:10.844307  260527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9059/.minikube/ca.pem
	I1122 00:20:10.844403  260527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9059/.minikube/ca.pem (1082 bytes)
	I1122 00:20:10.844496  260527 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9059/.minikube/cert.pem, removing ...
	I1122 00:20:10.844506  260527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9059/.minikube/cert.pem
	I1122 00:20:10.844532  260527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9059/.minikube/cert.pem (1123 bytes)
	I1122 00:20:10.844590  260527 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9059/.minikube/key.pem, removing ...
	I1122 00:20:10.844598  260527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9059/.minikube/key.pem
	I1122 00:20:10.844620  260527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9059/.minikube/key.pem (1679 bytes)
	I1122 00:20:10.844669  260527 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca-key.pem org=jenkins.embed-certs-491677 san=[127.0.0.1 192.168.85.2 embed-certs-491677 localhost minikube]
	I1122 00:20:10.881095  260527 provision.go:177] copyRemoteCerts
	I1122 00:20:10.881150  260527 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:20:10.881198  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:10.899974  260527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa Username:docker}
	I1122 00:20:10.993091  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:20:11.014763  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1122 00:20:11.034702  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1122 00:20:11.053678  260527 provision.go:87] duration metric: took 228.246896ms to configureAuth
	I1122 00:20:11.053708  260527 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:20:11.053892  260527 config.go:182] Loaded profile config "embed-certs-491677": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:20:11.053909  260527 machine.go:97] duration metric: took 3.67941396s to provisionDockerMachine
	I1122 00:20:11.053917  260527 client.go:176] duration metric: took 9.342299036s to LocalClient.Create
	I1122 00:20:11.053943  260527 start.go:167] duration metric: took 9.342388491s to libmachine.API.Create "embed-certs-491677"
	I1122 00:20:11.053956  260527 start.go:293] postStartSetup for "embed-certs-491677" (driver="docker")
	I1122 00:20:11.053984  260527 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:20:11.054052  260527 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:20:11.054103  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:11.073167  260527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa Username:docker}
	I1122 00:20:11.168158  260527 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:20:11.172076  260527 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:20:11.172422  260527 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:20:11.172459  260527 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9059/.minikube/addons for local assets ...
	I1122 00:20:11.172556  260527 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9059/.minikube/files for local assets ...
	I1122 00:20:11.172675  260527 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/ssl/certs/145302.pem -> 145302.pem in /etc/ssl/certs
	I1122 00:20:11.172811  260527 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:20:11.182207  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/ssl/certs/145302.pem --> /etc/ssl/certs/145302.pem (1708 bytes)
	I1122 00:20:11.203784  260527 start.go:296] duration metric: took 149.811059ms for postStartSetup
	I1122 00:20:11.204173  260527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-491677
	I1122 00:20:11.222954  260527 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/config.json ...
	I1122 00:20:11.223305  260527 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:20:11.223354  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:11.242018  260527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa Username:docker}
	I1122 00:20:11.333726  260527 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:20:11.338527  260527 start.go:128] duration metric: took 9.62936097s to createHost
	I1122 00:20:11.338558  260527 start.go:83] releasing machines lock for "embed-certs-491677", held for 9.629502399s
	I1122 00:20:11.338631  260527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-491677
	I1122 00:20:11.357563  260527 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:20:11.357634  260527 ssh_runner.go:195] Run: cat /version.json
	I1122 00:20:11.357684  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:11.357690  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:11.377098  260527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa Username:docker}
	I1122 00:20:11.378067  260527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa Username:docker}
	I1122 00:20:08.727161  218693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:20:08.727652  218693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:20:08.727710  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:20:08.727762  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:20:08.754498  218693 cri.go:89] found id: "031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:08.754522  218693 cri.go:89] found id: "2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:08.754527  218693 cri.go:89] found id: ""
	I1122 00:20:08.754535  218693 logs.go:282] 2 containers: [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587]
	I1122 00:20:08.754583  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:08.758867  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:08.762449  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1122 00:20:08.762501  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:20:08.788422  218693 cri.go:89] found id: "ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:08.788444  218693 cri.go:89] found id: ""
	I1122 00:20:08.788455  218693 logs.go:282] 1 containers: [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7]
	I1122 00:20:08.788512  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:08.792603  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1122 00:20:08.792668  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:20:08.820677  218693 cri.go:89] found id: ""
	I1122 00:20:08.820703  218693 logs.go:282] 0 containers: []
	W1122 00:20:08.820711  218693 logs.go:284] No container was found matching "coredns"
	I1122 00:20:08.820717  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:20:08.820769  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:20:08.848396  218693 cri.go:89] found id: "8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:08.848418  218693 cri.go:89] found id: "b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:08.848422  218693 cri.go:89] found id: ""
	I1122 00:20:08.848429  218693 logs.go:282] 2 containers: [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2]
	I1122 00:20:08.848485  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:08.852633  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:08.856393  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:20:08.856469  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:20:08.884423  218693 cri.go:89] found id: ""
	I1122 00:20:08.884454  218693 logs.go:282] 0 containers: []
	W1122 00:20:08.884467  218693 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:20:08.884476  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:20:08.884529  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:20:08.911898  218693 cri.go:89] found id: "91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:08.911917  218693 cri.go:89] found id: "13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:08.911921  218693 cri.go:89] found id: ""
	I1122 00:20:08.911928  218693 logs.go:282] 2 containers: [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216]
	I1122 00:20:08.912000  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:08.916097  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:08.919808  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1122 00:20:08.919868  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:20:08.945704  218693 cri.go:89] found id: ""
	I1122 00:20:08.945731  218693 logs.go:282] 0 containers: []
	W1122 00:20:08.945742  218693 logs.go:284] No container was found matching "kindnet"
	I1122 00:20:08.945750  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:20:08.945811  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:20:08.971599  218693 cri.go:89] found id: ""
	I1122 00:20:08.971630  218693 logs.go:282] 0 containers: []
	W1122 00:20:08.971642  218693 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:20:08.971658  218693 logs.go:123] Gathering logs for dmesg ...
	I1122 00:20:08.971686  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:20:08.985779  218693 logs.go:123] Gathering logs for etcd [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7] ...
	I1122 00:20:08.985806  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:09.018373  218693 logs.go:123] Gathering logs for kube-scheduler [b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2] ...
	I1122 00:20:09.018407  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:09.055328  218693 logs.go:123] Gathering logs for containerd ...
	I1122 00:20:09.055359  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1122 00:20:09.098567  218693 logs.go:123] Gathering logs for kubelet ...
	I1122 00:20:09.098608  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:20:09.183392  218693 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:20:09.183433  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:20:09.242636  218693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:20:09.242654  218693 logs.go:123] Gathering logs for kube-apiserver [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d] ...
	I1122 00:20:09.242666  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:09.276133  218693 logs.go:123] Gathering logs for kube-apiserver [2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587] ...
	I1122 00:20:09.276179  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:09.310731  218693 logs.go:123] Gathering logs for kube-scheduler [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78] ...
	I1122 00:20:09.310769  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:09.362187  218693 logs.go:123] Gathering logs for kube-controller-manager [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a] ...
	I1122 00:20:09.362226  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:09.391737  218693 logs.go:123] Gathering logs for kube-controller-manager [13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216] ...
	I1122 00:20:09.391763  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:09.425753  218693 logs.go:123] Gathering logs for container status ...
	I1122 00:20:09.425787  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:20:11.959328  218693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:20:11.959805  218693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:20:11.959868  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:20:11.959935  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:20:11.993113  218693 cri.go:89] found id: "031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:11.993137  218693 cri.go:89] found id: "2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:11.993143  218693 cri.go:89] found id: ""
	I1122 00:20:11.993153  218693 logs.go:282] 2 containers: [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587]
	I1122 00:20:11.993213  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:11.997946  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:12.002616  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1122 00:20:12.002741  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:20:12.040113  218693 cri.go:89] found id: "ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:12.040150  218693 cri.go:89] found id: ""
	I1122 00:20:12.040160  218693 logs.go:282] 1 containers: [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7]
	I1122 00:20:12.040220  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:12.045665  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1122 00:20:12.045732  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:20:12.081343  218693 cri.go:89] found id: ""
	I1122 00:20:12.081375  218693 logs.go:282] 0 containers: []
	W1122 00:20:12.081384  218693 logs.go:284] No container was found matching "coredns"
	I1122 00:20:12.081389  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:20:12.081449  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:20:12.116486  218693 cri.go:89] found id: "8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:12.117024  218693 cri.go:89] found id: "b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:12.117045  218693 cri.go:89] found id: ""
	I1122 00:20:12.117055  218693 logs.go:282] 2 containers: [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2]
	I1122 00:20:12.117115  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:12.121469  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:12.125453  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:20:12.125520  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:20:12.159076  218693 cri.go:89] found id: ""
	I1122 00:20:12.159108  218693 logs.go:282] 0 containers: []
	W1122 00:20:12.159121  218693 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:20:12.159130  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:20:12.159191  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:20:11.523900  260527 ssh_runner.go:195] Run: systemctl --version
	I1122 00:20:11.531084  260527 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:20:11.536010  260527 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:20:11.536130  260527 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:20:11.563766  260527 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1122 00:20:11.563792  260527 start.go:496] detecting cgroup driver to use...
	I1122 00:20:11.563830  260527 detect.go:190] detected "systemd" cgroup driver on host os
	I1122 00:20:11.563873  260527 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1122 00:20:11.579543  260527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1122 00:20:11.593598  260527 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:20:11.593666  260527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:20:11.610889  260527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:20:11.629723  260527 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:20:11.730670  260527 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:20:11.819921  260527 docker.go:234] disabling docker service ...
	I1122 00:20:11.819985  260527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:20:11.839159  260527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:20:11.854142  260527 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:20:11.943699  260527 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:20:12.053855  260527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:20:12.073171  260527 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:20:12.089999  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1122 00:20:12.105012  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1122 00:20:12.117591  260527 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1122 00:20:12.117652  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1122 00:20:12.128817  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:20:12.142147  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1122 00:20:12.154635  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:20:12.169029  260527 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:20:12.181631  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1122 00:20:12.194568  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1122 00:20:12.207294  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1122 00:20:12.218684  260527 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:20:12.228679  260527 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:20:12.241707  260527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:20:12.337447  260527 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1122 00:20:12.443801  260527 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1122 00:20:12.443870  260527 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1122 00:20:12.448114  260527 start.go:564] Will wait 60s for crictl version
	I1122 00:20:12.448178  260527 ssh_runner.go:195] Run: which crictl
	I1122 00:20:12.452113  260527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:20:12.481619  260527 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1122 00:20:12.481687  260527 ssh_runner.go:195] Run: containerd --version
	I1122 00:20:12.506954  260527 ssh_runner.go:195] Run: containerd --version
	I1122 00:20:12.537127  260527 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	W1122 00:20:08.528688  251199 node_ready.go:57] node "no-preload-781232" has "Ready":"False" status (will retry)
	W1122 00:20:10.529626  251199 node_ready.go:57] node "no-preload-781232" has "Ready":"False" status (will retry)
	I1122 00:20:12.029744  251199 node_ready.go:49] node "no-preload-781232" is "Ready"
	I1122 00:20:12.029782  251199 node_ready.go:38] duration metric: took 14.503754974s for node "no-preload-781232" to be "Ready" ...
	I1122 00:20:12.029799  251199 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:20:12.029867  251199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:20:12.049755  251199 api_server.go:72] duration metric: took 14.826557708s to wait for apiserver process to appear ...
	I1122 00:20:12.049782  251199 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:20:12.049803  251199 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1122 00:20:12.055733  251199 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1122 00:20:12.057374  251199 api_server.go:141] control plane version: v1.34.1
	I1122 00:20:12.057405  251199 api_server.go:131] duration metric: took 7.61544ms to wait for apiserver health ...
	I1122 00:20:12.057416  251199 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:20:12.062154  251199 system_pods.go:59] 8 kube-system pods found
	I1122 00:20:12.062190  251199 system_pods.go:61] "coredns-66bc5c9577-9wcct" [67b97cc5-016b-44d1-8119-dd6aa4932f83] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:12.062199  251199 system_pods.go:61] "etcd-no-preload-781232" [85c9627b-3102-439d-83e4-9ee3353591c1] Running
	I1122 00:20:12.062207  251199 system_pods.go:61] "kindnet-llcnc" [fcdd9f25-4804-47c2-8f09-b6a2d688a8bc] Running
	I1122 00:20:12.062212  251199 system_pods.go:61] "kube-apiserver-no-preload-781232" [4a4b5bf8-8262-46c5-9aa8-5a0bb0af364c] Running
	I1122 00:20:12.062218  251199 system_pods.go:61] "kube-controller-manager-no-preload-781232" [0c4fed80-9ce3-4b0d-99dd-ae11fc92104e] Running
	I1122 00:20:12.062223  251199 system_pods.go:61] "kube-proxy-685jg" [33a2d2c1-e364-4ec8-a9a0-69ba9146625f] Running
	I1122 00:20:12.062228  251199 system_pods.go:61] "kube-scheduler-no-preload-781232" [ec2ea83e-6638-4945-b4e4-ef3142f30481] Running
	I1122 00:20:12.062237  251199 system_pods.go:61] "storage-provisioner" [904bdf70-7728-45c5-a9ae-487aed28e6fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:12.062245  251199 system_pods.go:74] duration metric: took 4.821603ms to wait for pod list to return data ...
	I1122 00:20:12.062254  251199 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:20:12.065112  251199 default_sa.go:45] found service account: "default"
	I1122 00:20:12.065138  251199 default_sa.go:55] duration metric: took 2.848928ms for default service account to be created ...
	I1122 00:20:12.065149  251199 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:20:12.069582  251199 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:12.069625  251199 system_pods.go:89] "coredns-66bc5c9577-9wcct" [67b97cc5-016b-44d1-8119-dd6aa4932f83] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:12.069633  251199 system_pods.go:89] "etcd-no-preload-781232" [85c9627b-3102-439d-83e4-9ee3353591c1] Running
	I1122 00:20:12.069648  251199 system_pods.go:89] "kindnet-llcnc" [fcdd9f25-4804-47c2-8f09-b6a2d688a8bc] Running
	I1122 00:20:12.069655  251199 system_pods.go:89] "kube-apiserver-no-preload-781232" [4a4b5bf8-8262-46c5-9aa8-5a0bb0af364c] Running
	I1122 00:20:12.069661  251199 system_pods.go:89] "kube-controller-manager-no-preload-781232" [0c4fed80-9ce3-4b0d-99dd-ae11fc92104e] Running
	I1122 00:20:12.069666  251199 system_pods.go:89] "kube-proxy-685jg" [33a2d2c1-e364-4ec8-a9a0-69ba9146625f] Running
	I1122 00:20:12.069670  251199 system_pods.go:89] "kube-scheduler-no-preload-781232" [ec2ea83e-6638-4945-b4e4-ef3142f30481] Running
	I1122 00:20:12.069676  251199 system_pods.go:89] "storage-provisioner" [904bdf70-7728-45c5-a9ae-487aed28e6fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:12.069728  251199 retry.go:31] will retry after 227.269849ms: missing components: kube-dns
	I1122 00:20:12.301834  251199 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:12.301869  251199 system_pods.go:89] "coredns-66bc5c9577-9wcct" [67b97cc5-016b-44d1-8119-dd6aa4932f83] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:12.301877  251199 system_pods.go:89] "etcd-no-preload-781232" [85c9627b-3102-439d-83e4-9ee3353591c1] Running
	I1122 00:20:12.301886  251199 system_pods.go:89] "kindnet-llcnc" [fcdd9f25-4804-47c2-8f09-b6a2d688a8bc] Running
	I1122 00:20:12.301892  251199 system_pods.go:89] "kube-apiserver-no-preload-781232" [4a4b5bf8-8262-46c5-9aa8-5a0bb0af364c] Running
	I1122 00:20:12.301898  251199 system_pods.go:89] "kube-controller-manager-no-preload-781232" [0c4fed80-9ce3-4b0d-99dd-ae11fc92104e] Running
	I1122 00:20:12.301903  251199 system_pods.go:89] "kube-proxy-685jg" [33a2d2c1-e364-4ec8-a9a0-69ba9146625f] Running
	I1122 00:20:12.301910  251199 system_pods.go:89] "kube-scheduler-no-preload-781232" [ec2ea83e-6638-4945-b4e4-ef3142f30481] Running
	I1122 00:20:12.301917  251199 system_pods.go:89] "storage-provisioner" [904bdf70-7728-45c5-a9ae-487aed28e6fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:12.301938  251199 retry.go:31] will retry after 387.887736ms: missing components: kube-dns
	I1122 00:20:12.694992  251199 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:12.695026  251199 system_pods.go:89] "coredns-66bc5c9577-9wcct" [67b97cc5-016b-44d1-8119-dd6aa4932f83] Running
	I1122 00:20:12.695035  251199 system_pods.go:89] "etcd-no-preload-781232" [85c9627b-3102-439d-83e4-9ee3353591c1] Running
	I1122 00:20:12.695041  251199 system_pods.go:89] "kindnet-llcnc" [fcdd9f25-4804-47c2-8f09-b6a2d688a8bc] Running
	I1122 00:20:12.695047  251199 system_pods.go:89] "kube-apiserver-no-preload-781232" [4a4b5bf8-8262-46c5-9aa8-5a0bb0af364c] Running
	I1122 00:20:12.695052  251199 system_pods.go:89] "kube-controller-manager-no-preload-781232" [0c4fed80-9ce3-4b0d-99dd-ae11fc92104e] Running
	I1122 00:20:12.695060  251199 system_pods.go:89] "kube-proxy-685jg" [33a2d2c1-e364-4ec8-a9a0-69ba9146625f] Running
	I1122 00:20:12.695065  251199 system_pods.go:89] "kube-scheduler-no-preload-781232" [ec2ea83e-6638-4945-b4e4-ef3142f30481] Running
	I1122 00:20:12.695070  251199 system_pods.go:89] "storage-provisioner" [904bdf70-7728-45c5-a9ae-487aed28e6fc] Running
	I1122 00:20:12.695080  251199 system_pods.go:126] duration metric: took 629.924123ms to wait for k8s-apps to be running ...
	I1122 00:20:12.695093  251199 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:20:12.695144  251199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:20:12.708823  251199 system_svc.go:56] duration metric: took 13.721013ms WaitForService to wait for kubelet
	I1122 00:20:12.708855  251199 kubeadm.go:587] duration metric: took 15.485663176s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:20:12.708874  251199 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:20:12.712345  251199 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1122 00:20:12.712376  251199 node_conditions.go:123] node cpu capacity is 8
	I1122 00:20:12.712396  251199 node_conditions.go:105] duration metric: took 3.516354ms to run NodePressure ...
	I1122 00:20:12.712412  251199 start.go:242] waiting for startup goroutines ...
	I1122 00:20:12.712423  251199 start.go:247] waiting for cluster config update ...
	I1122 00:20:12.712441  251199 start.go:256] writing updated cluster config ...
	I1122 00:20:12.712733  251199 ssh_runner.go:195] Run: rm -f paused
	I1122 00:20:12.717390  251199 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:20:12.721696  251199 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9wcct" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:12.726947  251199 pod_ready.go:94] pod "coredns-66bc5c9577-9wcct" is "Ready"
	I1122 00:20:12.726976  251199 pod_ready.go:86] duration metric: took 5.255643ms for pod "coredns-66bc5c9577-9wcct" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:12.729559  251199 pod_ready.go:83] waiting for pod "etcd-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:12.734425  251199 pod_ready.go:94] pod "etcd-no-preload-781232" is "Ready"
	I1122 00:20:12.734455  251199 pod_ready.go:86] duration metric: took 4.86467ms for pod "etcd-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:12.736916  251199 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:12.741485  251199 pod_ready.go:94] pod "kube-apiserver-no-preload-781232" is "Ready"
	I1122 00:20:12.741515  251199 pod_ready.go:86] duration metric: took 4.574913ms for pod "kube-apiserver-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:12.743848  251199 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	d01de905a2d07       56cc512116c8f       6 seconds ago       Running             busybox                   0                   e511b813570c1       busybox                                          default
	f7527a8afc668       ead0a4a53df89       13 seconds ago      Running             coredns                   0                   b00fa05a6c375       coredns-5dd5756b68-pqbfp                         kube-system
	f2a1ec178c227       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   a3bbedf747991       storage-provisioner                              kube-system
	abad042f2a4ad       409467f978b4a       24 seconds ago      Running             kindnet-cni               0                   721fcd34a44d6       kindnet-ldtd8                                    kube-system
	5119ee9a69fb3       ea1030da44aa1       28 seconds ago      Running             kube-proxy                0                   be780c30602ce       kube-proxy-kqrng                                 kube-system
	4c35680ab2dd6       73deb9a3f7025       47 seconds ago      Running             etcd                      0                   adbbfe9941b27       etcd-old-k8s-version-462319                      kube-system
	1863b35aae093       f6f496300a2ae       47 seconds ago      Running             kube-scheduler            0                   45afb7772f575       kube-scheduler-old-k8s-version-462319            kube-system
	e398c42ad8188       bb5e0dde9054c       47 seconds ago      Running             kube-apiserver            0                   0ce7c78109ce7       kube-apiserver-old-k8s-version-462319            kube-system
	355ecffe75a3f       4be79c38a4bab       47 seconds ago      Running             kube-controller-manager   0                   5dfd6ffd80d1f       kube-controller-manager-old-k8s-version-462319   kube-system
	
	
	==> containerd <==
	Nov 22 00:20:01 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:01.327237013Z" level=info msg="connecting to shim f2a1ec178c227617bd32e678c94e3d44e606683f0b10ccdbc182dec6d6d5c9e9" address="unix:///run/containerd/s/62835cccd20d8437bb636df9ea457fe2506fdd9387d47f5e31a45c75f852a444" protocol=ttrpc version=3
	Nov 22 00:20:01 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:01.328631129Z" level=info msg="CreateContainer within sandbox \"b00fa05a6c375cb07b56b89e739f90401ad7f950dedcb886ca1774eba46a4293\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 22 00:20:01 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:01.336790890Z" level=info msg="Container f7527a8afc6683a9935b781bf3006cc9c368a534f3eafba3501b6509659a437b: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:20:01 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:01.343474448Z" level=info msg="CreateContainer within sandbox \"b00fa05a6c375cb07b56b89e739f90401ad7f950dedcb886ca1774eba46a4293\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f7527a8afc6683a9935b781bf3006cc9c368a534f3eafba3501b6509659a437b\""
	Nov 22 00:20:01 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:01.344107519Z" level=info msg="StartContainer for \"f7527a8afc6683a9935b781bf3006cc9c368a534f3eafba3501b6509659a437b\""
	Nov 22 00:20:01 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:01.345166179Z" level=info msg="connecting to shim f7527a8afc6683a9935b781bf3006cc9c368a534f3eafba3501b6509659a437b" address="unix:///run/containerd/s/39593751a6c9fe87428291df6153bccdab6c22a754601ae94cfc40e697ece6ec" protocol=ttrpc version=3
	Nov 22 00:20:01 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:01.389133316Z" level=info msg="StartContainer for \"f2a1ec178c227617bd32e678c94e3d44e606683f0b10ccdbc182dec6d6d5c9e9\" returns successfully"
	Nov 22 00:20:01 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:01.404040136Z" level=info msg="StartContainer for \"f7527a8afc6683a9935b781bf3006cc9c368a534f3eafba3501b6509659a437b\" returns successfully"
	Nov 22 00:20:05 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:05.083706178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:89dd9411-148d-4a8e-98d3-a51a8eab9d35,Namespace:default,Attempt:0,}"
	Nov 22 00:20:05 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:05.877683353Z" level=info msg="connecting to shim e511b813570c19e1d5c5c2002304caba5cc1bac5847092a53135ba9cb1b1dd7c" address="unix:///run/containerd/s/b045fc79abfabe20fc9affb730c643e7c442531994f349b7904cd7f34ab0272a" namespace=k8s.io protocol=ttrpc version=3
	Nov 22 00:20:06 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:06.066243350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:89dd9411-148d-4a8e-98d3-a51a8eab9d35,Namespace:default,Attempt:0,} returns sandbox id \"e511b813570c19e1d5c5c2002304caba5cc1bac5847092a53135ba9cb1b1dd7c\""
	Nov 22 00:20:06 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:06.068244404Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 22 00:20:08 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:08.300595484Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:20:08 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:08.301398927Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396644"
	Nov 22 00:20:08 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:08.302750252Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:20:08 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:08.304853958Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:20:08 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:08.305213907Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.236905893s"
	Nov 22 00:20:08 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:08.305247082Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 22 00:20:08 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:08.306892429Z" level=info msg="CreateContainer within sandbox \"e511b813570c19e1d5c5c2002304caba5cc1bac5847092a53135ba9cb1b1dd7c\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 22 00:20:08 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:08.314973197Z" level=info msg="Container d01de905a2d0700ad9691d5a73cf41f69bb587ec67e218858862ae31fcd53485: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:20:08 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:08.321465429Z" level=info msg="CreateContainer within sandbox \"e511b813570c19e1d5c5c2002304caba5cc1bac5847092a53135ba9cb1b1dd7c\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"d01de905a2d0700ad9691d5a73cf41f69bb587ec67e218858862ae31fcd53485\""
	Nov 22 00:20:08 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:08.322134703Z" level=info msg="StartContainer for \"d01de905a2d0700ad9691d5a73cf41f69bb587ec67e218858862ae31fcd53485\""
	Nov 22 00:20:08 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:08.323141205Z" level=info msg="connecting to shim d01de905a2d0700ad9691d5a73cf41f69bb587ec67e218858862ae31fcd53485" address="unix:///run/containerd/s/b045fc79abfabe20fc9affb730c643e7c442531994f349b7904cd7f34ab0272a" protocol=ttrpc version=3
	Nov 22 00:20:08 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:08.376916692Z" level=info msg="StartContainer for \"d01de905a2d0700ad9691d5a73cf41f69bb587ec67e218858862ae31fcd53485\" returns successfully"
	Nov 22 00:20:13 old-k8s-version-462319 containerd[666]: E1122 00:20:13.803924     666 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [f7527a8afc6683a9935b781bf3006cc9c368a534f3eafba3501b6509659a437b] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60216 - 50495 "HINFO IN 8122801349455611517.3511563579879947437. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.074291599s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-462319
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-462319
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=old-k8s-version-462319
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_19_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:19:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-462319
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:20:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:20:04 +0000   Sat, 22 Nov 2025 00:19:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:20:04 +0000   Sat, 22 Nov 2025 00:19:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:20:04 +0000   Sat, 22 Nov 2025 00:19:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:20:04 +0000   Sat, 22 Nov 2025 00:20:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-462319
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 5665009e93b91d39dc05718b691e3875
	  System UUID:                1a763c28-0497-45f3-b9e8-458b8b4eb589
	  Boot ID:                    725aae03-f893-4e0b-b029-cbd3b00ccfdd
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-pqbfp                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-old-k8s-version-462319                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         42s
	  kube-system                 kindnet-ldtd8                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-462319             250m (3%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-old-k8s-version-462319    200m (2%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-proxy-kqrng                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-462319             100m (1%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 42s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  42s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  42s   kubelet          Node old-k8s-version-462319 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s   kubelet          Node old-k8s-version-462319 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s   kubelet          Node old-k8s-version-462319 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s   node-controller  Node old-k8s-version-462319 event: Registered Node old-k8s-version-462319 in Controller
	  Normal  NodeReady                15s   kubelet          Node old-k8s-version-462319 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov21 23:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000865] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.410276] i8042: Warning: Keylock active
	[  +0.014947] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.495836] block sda: the capability attribute has been deprecated.
	[  +0.091740] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024333] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.452540] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [4c35680ab2dd6966de549749b29af9a5a8bccb172d03360ef57391e45ea9f885] <==
	{"level":"info","ts":"2025-11-22T00:19:28.060277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-11-22T00:19:28.060288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-11-22T00:19:28.061026Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-22T00:19:28.061614Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-22T00:19:28.061614Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-462319 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-22T00:19:28.061648Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-22T00:19:28.06183Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-22T00:19:28.0621Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-22T00:19:28.062388Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-22T00:19:28.062242Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-22T00:19:28.062743Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-22T00:19:28.064288Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-11-22T00:19:28.064366Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-22T00:19:32.633697Z","caller":"traceutil/trace.go:171","msg":"trace[64928526] transaction","detail":"{read_only:false; response_revision:210; number_of_response:1; }","duration":"260.007025ms","start":"2025-11-22T00:19:32.373672Z","end":"2025-11-22T00:19:32.633679Z","steps":["trace[64928526] 'process raft request'  (duration: 259.898405ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:19:33.081079Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"335.74286ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790177431359743 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/secrets/kube-system/bootstrap-token-vumgow\" mod_revision:0 > success:<request_put:<key:\"/registry/secrets/kube-system/bootstrap-token-vumgow\" value_size:617 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-22T00:19:33.081182Z","caller":"traceutil/trace.go:171","msg":"trace[454440905] transaction","detail":"{read_only:false; response_revision:211; number_of_response:1; }","duration":"441.168552ms","start":"2025-11-22T00:19:32.639997Z","end":"2025-11-22T00:19:33.081166Z","steps":["trace[454440905] 'process raft request'  (duration: 104.950033ms)","trace[454440905] 'compare'  (duration: 335.635432ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-22T00:19:33.081293Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-22T00:19:32.63998Z","time spent":"441.252908ms","remote":"127.0.0.1:42828","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/secrets/kube-system/bootstrap-token-vumgow\" mod_revision:0 > success:<request_put:<key:\"/registry/secrets/kube-system/bootstrap-token-vumgow\" value_size:617 >> failure:<>"}
	{"level":"warn","ts":"2025-11-22T00:19:44.266771Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.299403ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-22T00:19:44.266864Z","caller":"traceutil/trace.go:171","msg":"trace[842289003] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/bootstrap-signer; range_end:; response_count:0; response_revision:282; }","duration":"130.453771ms","start":"2025-11-22T00:19:44.136394Z","end":"2025-11-22T00:19:44.266847Z","steps":["trace[842289003] 'range keys from in-memory index tree'  (duration: 130.216573ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:19:44.386458Z","caller":"traceutil/trace.go:171","msg":"trace[490276607] linearizableReadLoop","detail":"{readStateIndex:296; appliedIndex:295; }","duration":"101.94453ms","start":"2025-11-22T00:19:44.284493Z","end":"2025-11-22T00:19:44.386437Z","steps":["trace[490276607] 'read index received'  (duration: 101.776407ms)","trace[490276607] 'applied index is now lower than readState.Index'  (duration: 167.67µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:19:44.386547Z","caller":"traceutil/trace.go:171","msg":"trace[1514742623] transaction","detail":"{read_only:false; response_revision:283; number_of_response:1; }","duration":"114.786396ms","start":"2025-11-22T00:19:44.271741Z","end":"2025-11-22T00:19:44.386527Z","steps":["trace[1514742623] 'process raft request'  (duration: 114.589176ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:19:44.386605Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.121151ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-22T00:19:44.386631Z","caller":"traceutil/trace.go:171","msg":"trace[800592602] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:283; }","duration":"102.163591ms","start":"2025-11-22T00:19:44.284459Z","end":"2025-11-22T00:19:44.386622Z","steps":["trace[800592602] 'agreement among raft nodes before linearized reading'  (duration: 102.059746ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:20:06.401485Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.691938ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-22T00:20:06.401571Z","caller":"traceutil/trace.go:171","msg":"trace[919203119] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:425; }","duration":"116.801997ms","start":"2025-11-22T00:20:06.284749Z","end":"2025-11-22T00:20:06.401551Z","steps":["trace[919203119] 'range keys from in-memory index tree'  (duration: 116.607287ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:20:15 up  1:02,  0 user,  load average: 6.48, 3.76, 2.29
	Linux old-k8s-version-462319 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [abad042f2a4adf0bb5a1e42eb6090d0433dbd093e2502e0a0763cd88008fa485] <==
	I1122 00:19:50.358053       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:19:50.379516       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1122 00:19:50.379673       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:19:50.379699       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:19:50.379728       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:19:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:19:50.657926       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:19:50.657947       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:19:50.657972       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:19:50.658082       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:19:50.980378       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:19:50.980413       1 metrics.go:72] Registering metrics
	I1122 00:19:50.980477       1 controller.go:711] "Syncing nftables rules"
	I1122 00:20:00.663360       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1122 00:20:00.663424       1 main.go:301] handling current node
	I1122 00:20:10.657535       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1122 00:20:10.657598       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e398c42ad8188a2a96d101f089a0968d374f75b6827a154f004bd956b9155274] <==
	I1122 00:19:29.739253       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1122 00:19:29.739494       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1122 00:19:29.739756       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1122 00:19:29.739791       1 aggregator.go:166] initial CRD sync complete...
	I1122 00:19:29.739800       1 autoregister_controller.go:141] Starting autoregister controller
	I1122 00:19:29.739807       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1122 00:19:29.739814       1 cache.go:39] Caches are synced for autoregister controller
	I1122 00:19:29.740221       1 controller.go:624] quota admission added evaluator for: namespaces
	I1122 00:19:29.740304       1 shared_informer.go:318] Caches are synced for configmaps
	I1122 00:19:29.936021       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:19:30.645531       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1122 00:19:30.649522       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1122 00:19:30.649546       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:19:31.151928       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:19:31.192786       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:19:31.249628       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1122 00:19:31.255812       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1122 00:19:31.257056       1 controller.go:624] quota admission added evaluator for: endpoints
	I1122 00:19:31.261743       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:19:31.700612       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1122 00:19:33.349558       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1122 00:19:33.363593       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1122 00:19:33.376299       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1122 00:19:46.344730       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1122 00:19:46.397570       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [355ecffe75a3ff0874bfe775cd91a06b9bfff9f2dc65c709c3da1adca76e11c1] <==
	I1122 00:19:45.646325       1 shared_informer.go:318] Caches are synced for resource quota
	I1122 00:19:45.687399       1 shared_informer.go:318] Caches are synced for disruption
	I1122 00:19:45.693911       1 shared_informer.go:318] Caches are synced for resource quota
	I1122 00:19:46.009572       1 shared_informer.go:318] Caches are synced for garbage collector
	I1122 00:19:46.084787       1 shared_informer.go:318] Caches are synced for garbage collector
	I1122 00:19:46.084820       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1122 00:19:46.355549       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-kqrng"
	I1122 00:19:46.357410       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-ldtd8"
	I1122 00:19:46.402945       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1122 00:19:46.497513       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-pqbfp"
	I1122 00:19:46.505494       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-bjgv6"
	I1122 00:19:46.515365       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="112.69029ms"
	I1122 00:19:46.537252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.812757ms"
	I1122 00:19:46.537541       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="177.843µs"
	I1122 00:19:47.048823       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1122 00:19:47.070179       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-bjgv6"
	I1122 00:19:47.078565       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="31.08623ms"
	I1122 00:19:47.085902       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.261706ms"
	I1122 00:19:47.086048       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.581µs"
	I1122 00:20:00.892386       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="138.286µs"
	I1122 00:20:00.912888       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.033µs"
	I1122 00:20:01.551233       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="124.993µs"
	I1122 00:20:02.562092       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.330757ms"
	I1122 00:20:02.562207       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="69.9µs"
	I1122 00:20:05.541105       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [5119ee9a69fb309c6fe6c40bfdf7853c1d5fd0390280d45b28a695bd3259a0c0] <==
	I1122 00:19:47.043350       1 server_others.go:69] "Using iptables proxy"
	I1122 00:19:47.061630       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1122 00:19:47.101193       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:19:47.103704       1 server_others.go:152] "Using iptables Proxier"
	I1122 00:19:47.103745       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1122 00:19:47.103755       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1122 00:19:47.103806       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1122 00:19:47.104104       1 server.go:846] "Version info" version="v1.28.0"
	I1122 00:19:47.104124       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:19:47.104828       1 config.go:188] "Starting service config controller"
	I1122 00:19:47.104867       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1122 00:19:47.104926       1 config.go:97] "Starting endpoint slice config controller"
	I1122 00:19:47.104932       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1122 00:19:47.105174       1 config.go:315] "Starting node config controller"
	I1122 00:19:47.105210       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1122 00:19:47.205514       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1122 00:19:47.205516       1 shared_informer.go:318] Caches are synced for service config
	I1122 00:19:47.205561       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [1863b35aae093f7c8f897de1e1301f7582ed68975578bf5d2f19a845b5bbb715] <==
	W1122 00:19:29.717451       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1122 00:19:29.717478       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1122 00:19:29.717458       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1122 00:19:29.717515       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1122 00:19:29.717553       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1122 00:19:29.717616       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1122 00:19:29.717652       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1122 00:19:29.717675       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1122 00:19:30.562109       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1122 00:19:30.562139       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1122 00:19:30.586044       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1122 00:19:30.586087       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1122 00:19:30.770112       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1122 00:19:30.770162       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1122 00:19:30.772555       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1122 00:19:30.772599       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1122 00:19:30.781374       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1122 00:19:30.781431       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1122 00:19:30.807504       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1122 00:19:30.807533       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1122 00:19:30.845180       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1122 00:19:30.845236       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1122 00:19:30.871051       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1122 00:19:30.871090       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1122 00:19:33.910375       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 22 00:19:45 old-k8s-version-462319 kubelet[1521]: I1122 00:19:45.613796    1521 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 22 00:19:46 old-k8s-version-462319 kubelet[1521]: I1122 00:19:46.364926    1521 topology_manager.go:215] "Topology Admit Handler" podUID="643cd348-4af3-4720-af0d-e931f184742c" podNamespace="kube-system" podName="kube-proxy-kqrng"
	Nov 22 00:19:46 old-k8s-version-462319 kubelet[1521]: I1122 00:19:46.365817    1521 topology_manager.go:215] "Topology Admit Handler" podUID="6bf161d2-c442-466d-98b8-c313a127bf22" podNamespace="kube-system" podName="kindnet-ldtd8"
	Nov 22 00:19:46 old-k8s-version-462319 kubelet[1521]: I1122 00:19:46.396776    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-295rj\" (UniqueName: \"kubernetes.io/projected/643cd348-4af3-4720-af0d-e931f184742c-kube-api-access-295rj\") pod \"kube-proxy-kqrng\" (UID: \"643cd348-4af3-4720-af0d-e931f184742c\") " pod="kube-system/kube-proxy-kqrng"
	Nov 22 00:19:46 old-k8s-version-462319 kubelet[1521]: I1122 00:19:46.398874    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/643cd348-4af3-4720-af0d-e931f184742c-lib-modules\") pod \"kube-proxy-kqrng\" (UID: \"643cd348-4af3-4720-af0d-e931f184742c\") " pod="kube-system/kube-proxy-kqrng"
	Nov 22 00:19:46 old-k8s-version-462319 kubelet[1521]: I1122 00:19:46.398955    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6bf161d2-c442-466d-98b8-c313a127bf22-cni-cfg\") pod \"kindnet-ldtd8\" (UID: \"6bf161d2-c442-466d-98b8-c313a127bf22\") " pod="kube-system/kindnet-ldtd8"
	Nov 22 00:19:46 old-k8s-version-462319 kubelet[1521]: I1122 00:19:46.398980    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6bf161d2-c442-466d-98b8-c313a127bf22-xtables-lock\") pod \"kindnet-ldtd8\" (UID: \"6bf161d2-c442-466d-98b8-c313a127bf22\") " pod="kube-system/kindnet-ldtd8"
	Nov 22 00:19:46 old-k8s-version-462319 kubelet[1521]: I1122 00:19:46.399025    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6bf161d2-c442-466d-98b8-c313a127bf22-lib-modules\") pod \"kindnet-ldtd8\" (UID: \"6bf161d2-c442-466d-98b8-c313a127bf22\") " pod="kube-system/kindnet-ldtd8"
	Nov 22 00:19:46 old-k8s-version-462319 kubelet[1521]: I1122 00:19:46.399054    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/643cd348-4af3-4720-af0d-e931f184742c-kube-proxy\") pod \"kube-proxy-kqrng\" (UID: \"643cd348-4af3-4720-af0d-e931f184742c\") " pod="kube-system/kube-proxy-kqrng"
	Nov 22 00:19:46 old-k8s-version-462319 kubelet[1521]: I1122 00:19:46.399082    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/643cd348-4af3-4720-af0d-e931f184742c-xtables-lock\") pod \"kube-proxy-kqrng\" (UID: \"643cd348-4af3-4720-af0d-e931f184742c\") " pod="kube-system/kube-proxy-kqrng"
	Nov 22 00:19:46 old-k8s-version-462319 kubelet[1521]: I1122 00:19:46.399117    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwtxn\" (UniqueName: \"kubernetes.io/projected/6bf161d2-c442-466d-98b8-c313a127bf22-kube-api-access-xwtxn\") pod \"kindnet-ldtd8\" (UID: \"6bf161d2-c442-466d-98b8-c313a127bf22\") " pod="kube-system/kindnet-ldtd8"
	Nov 22 00:19:47 old-k8s-version-462319 kubelet[1521]: I1122 00:19:47.509109    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-kqrng" podStartSLOduration=1.509057216 podCreationTimestamp="2025-11-22 00:19:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:19:47.50894671 +0000 UTC m=+14.188238544" watchObservedRunningTime="2025-11-22 00:19:47.509057216 +0000 UTC m=+14.188349048"
	Nov 22 00:19:50 old-k8s-version-462319 kubelet[1521]: I1122 00:19:50.516088    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-ldtd8" podStartSLOduration=1.666002271 podCreationTimestamp="2025-11-22 00:19:46 +0000 UTC" firstStartedPulling="2025-11-22 00:19:47.157978554 +0000 UTC m=+13.837270379" lastFinishedPulling="2025-11-22 00:19:50.007957975 +0000 UTC m=+16.687249802" observedRunningTime="2025-11-22 00:19:50.515675934 +0000 UTC m=+17.194967778" watchObservedRunningTime="2025-11-22 00:19:50.515981694 +0000 UTC m=+17.195273528"
	Nov 22 00:20:00 old-k8s-version-462319 kubelet[1521]: I1122 00:20:00.709466    1521 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 22 00:20:00 old-k8s-version-462319 kubelet[1521]: I1122 00:20:00.889924    1521 topology_manager.go:215] "Topology Admit Handler" podUID="fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2" podNamespace="kube-system" podName="storage-provisioner"
	Nov 22 00:20:00 old-k8s-version-462319 kubelet[1521]: I1122 00:20:00.892871    1521 topology_manager.go:215] "Topology Admit Handler" podUID="44750e8d-5eeb-4845-9029-a58cbf976b62" podNamespace="kube-system" podName="coredns-5dd5756b68-pqbfp"
	Nov 22 00:20:00 old-k8s-version-462319 kubelet[1521]: I1122 00:20:00.993531    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44750e8d-5eeb-4845-9029-a58cbf976b62-config-volume\") pod \"coredns-5dd5756b68-pqbfp\" (UID: \"44750e8d-5eeb-4845-9029-a58cbf976b62\") " pod="kube-system/coredns-5dd5756b68-pqbfp"
	Nov 22 00:20:00 old-k8s-version-462319 kubelet[1521]: I1122 00:20:00.993597    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2-tmp\") pod \"storage-provisioner\" (UID: \"fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2\") " pod="kube-system/storage-provisioner"
	Nov 22 00:20:00 old-k8s-version-462319 kubelet[1521]: I1122 00:20:00.993637    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfnhk\" (UniqueName: \"kubernetes.io/projected/44750e8d-5eeb-4845-9029-a58cbf976b62-kube-api-access-pfnhk\") pod \"coredns-5dd5756b68-pqbfp\" (UID: \"44750e8d-5eeb-4845-9029-a58cbf976b62\") " pod="kube-system/coredns-5dd5756b68-pqbfp"
	Nov 22 00:20:00 old-k8s-version-462319 kubelet[1521]: I1122 00:20:00.993669    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj2fz\" (UniqueName: \"kubernetes.io/projected/fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2-kube-api-access-rj2fz\") pod \"storage-provisioner\" (UID: \"fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2\") " pod="kube-system/storage-provisioner"
	Nov 22 00:20:01 old-k8s-version-462319 kubelet[1521]: I1122 00:20:01.564512    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.564413938 podCreationTimestamp="2025-11-22 00:19:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:20:01.564333027 +0000 UTC m=+28.243624860" watchObservedRunningTime="2025-11-22 00:20:01.564413938 +0000 UTC m=+28.243705771"
	Nov 22 00:20:01 old-k8s-version-462319 kubelet[1521]: I1122 00:20:01.564659    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-pqbfp" podStartSLOduration=15.564629833 podCreationTimestamp="2025-11-22 00:19:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:20:01.551555332 +0000 UTC m=+28.230847165" watchObservedRunningTime="2025-11-22 00:20:01.564629833 +0000 UTC m=+28.243921660"
	Nov 22 00:20:04 old-k8s-version-462319 kubelet[1521]: I1122 00:20:04.775067    1521 topology_manager.go:215] "Topology Admit Handler" podUID="89dd9411-148d-4a8e-98d3-a51a8eab9d35" podNamespace="default" podName="busybox"
	Nov 22 00:20:04 old-k8s-version-462319 kubelet[1521]: I1122 00:20:04.915405    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7gkx\" (UniqueName: \"kubernetes.io/projected/89dd9411-148d-4a8e-98d3-a51a8eab9d35-kube-api-access-l7gkx\") pod \"busybox\" (UID: \"89dd9411-148d-4a8e-98d3-a51a8eab9d35\") " pod="default/busybox"
	Nov 22 00:20:08 old-k8s-version-462319 kubelet[1521]: I1122 00:20:08.563800    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.326082204 podCreationTimestamp="2025-11-22 00:20:04 +0000 UTC" firstStartedPulling="2025-11-22 00:20:06.067901148 +0000 UTC m=+32.747192973" lastFinishedPulling="2025-11-22 00:20:08.305570732 +0000 UTC m=+34.984862556" observedRunningTime="2025-11-22 00:20:08.563606355 +0000 UTC m=+35.242898188" watchObservedRunningTime="2025-11-22 00:20:08.563751787 +0000 UTC m=+35.243043620"
	
	
	==> storage-provisioner [f2a1ec178c227617bd32e678c94e3d44e606683f0b10ccdbc182dec6d6d5c9e9] <==
	I1122 00:20:01.401220       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:20:01.412796       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:20:01.412842       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1122 00:20:01.421489       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:20:01.421683       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-462319_fbf5718a-3981-4828-8660-7b6ddab898c0!
	I1122 00:20:01.421619       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d8be93cf-82a7-4f20-a2ea-927b67416b8f", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-462319_fbf5718a-3981-4828-8660-7b6ddab898c0 became leader
	I1122 00:20:01.522750       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-462319_fbf5718a-3981-4828-8660-7b6ddab898c0!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-462319 -n old-k8s-version-462319
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-462319 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-462319
helpers_test.go:243: (dbg) docker inspect old-k8s-version-462319:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "60eae3b63b81b346ead7547921d488153ed6b21604550a910dce24f5c18a0d66",
	        "Created": "2025-11-22T00:19:16.365495044Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 248707,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:19:16.402958348Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e5906b22e872a17998ae88aee6d850484e7a99144e0db6afcf2c44a53e6042d4",
	        "ResolvConfPath": "/var/lib/docker/containers/60eae3b63b81b346ead7547921d488153ed6b21604550a910dce24f5c18a0d66/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/60eae3b63b81b346ead7547921d488153ed6b21604550a910dce24f5c18a0d66/hostname",
	        "HostsPath": "/var/lib/docker/containers/60eae3b63b81b346ead7547921d488153ed6b21604550a910dce24f5c18a0d66/hosts",
	        "LogPath": "/var/lib/docker/containers/60eae3b63b81b346ead7547921d488153ed6b21604550a910dce24f5c18a0d66/60eae3b63b81b346ead7547921d488153ed6b21604550a910dce24f5c18a0d66-json.log",
	        "Name": "/old-k8s-version-462319",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-462319:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-462319",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "60eae3b63b81b346ead7547921d488153ed6b21604550a910dce24f5c18a0d66",
	                "LowerDir": "/var/lib/docker/overlay2/6ca06b58ff047715f101193d0f051e92ffb3bb47f4e9e98de16e3d4c7f58beb1-init/diff:/var/lib/docker/overlay2/4b4af9a4e857911a6b5096aeeaee227ee7577c6eff3b08bbb4e765c49ed2fb70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6ca06b58ff047715f101193d0f051e92ffb3bb47f4e9e98de16e3d4c7f58beb1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6ca06b58ff047715f101193d0f051e92ffb3bb47f4e9e98de16e3d4c7f58beb1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6ca06b58ff047715f101193d0f051e92ffb3bb47f4e9e98de16e3d4c7f58beb1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-462319",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-462319/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-462319",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-462319",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-462319",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b6589169c31c78bfea6577019ea30ba0adadee1467810b9b1a0b1b8b4a97b9f5",
	            "SandboxKey": "/var/run/docker/netns/b6589169c31c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-462319": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "08252eaaf7e532efc839aa6b0c4ce7bea14dc3e5057df8085e81eab6e1e46265",
	                    "EndpointID": "d132fdb6f6e769e175e9e69bd315da82881eb4351a6b66ae2fe24784dbabd3ac",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "6e:0f:4c:be:16:ac",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-462319",
	                        "60eae3b63b81"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-462319 -n old-k8s-version-462319
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-462319 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-462319 logs -n 25: (1.108526213s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cilium-687868 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                           │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo containerd config dump                                                                                                                                                                                                        │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo crio config                                                                                                                                                                                                                   │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ delete  │ -p cilium-687868                                                                                                                                                                                                                                    │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ start   │ -p old-k8s-version-462319 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-462319 │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:20 UTC │
	│ ssh     │ -p NoKubernetes-714059 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ start   │ -p cert-expiration-427330 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-427330 │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ delete  │ -p cert-expiration-427330                                                                                                                                                                                                                           │ cert-expiration-427330 │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ start   │ -p no-preload-781232 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-781232      │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:20 UTC │
	│ stop    │ -p NoKubernetes-714059                                                                                                                                                                                                                              │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ start   │ -p NoKubernetes-714059 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ ssh     │ -p NoKubernetes-714059 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ delete  │ -p NoKubernetes-714059                                                                                                                                                                                                                              │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:20 UTC │
	│ start   │ -p embed-certs-491677 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-491677     │ jenkins │ v1.37.0 │ 22 Nov 25 00:20 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:20:01
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:20:01.497017  260527 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:20:01.497324  260527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:20:01.497336  260527 out.go:374] Setting ErrFile to fd 2...
	I1122 00:20:01.497340  260527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:20:01.497588  260527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9059/.minikube/bin
	I1122 00:20:01.498054  260527 out.go:368] Setting JSON to false
	I1122 00:20:01.499443  260527 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3740,"bootTime":1763767061,"procs":385,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:20:01.499503  260527 start.go:143] virtualization: kvm guest
	I1122 00:20:01.501458  260527 out.go:179] * [embed-certs-491677] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:20:01.503562  260527 notify.go:221] Checking for updates...
	I1122 00:20:01.503572  260527 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:20:01.505088  260527 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:20:01.506758  260527 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9059/kubeconfig
	I1122 00:20:01.508287  260527 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9059/.minikube
	I1122 00:20:01.509699  260527 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:20:01.511183  260527 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:20:01.513382  260527 config.go:182] Loaded profile config "kubernetes-upgrade-882262": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:20:01.513541  260527 config.go:182] Loaded profile config "no-preload-781232": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:20:01.513638  260527 config.go:182] Loaded profile config "old-k8s-version-462319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1122 00:20:01.513752  260527 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:20:01.545401  260527 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1122 00:20:01.545504  260527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:20:01.611105  260527 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-22 00:20:01.601298329 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:20:01.611234  260527 docker.go:319] overlay module found
	I1122 00:20:01.613226  260527 out.go:179] * Using the docker driver based on user configuration
	I1122 00:20:01.614649  260527 start.go:309] selected driver: docker
	I1122 00:20:01.614666  260527 start.go:930] validating driver "docker" against <nil>
	I1122 00:20:01.614677  260527 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:20:01.615350  260527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:20:01.674666  260527 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-22 00:20:01.664354692 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:20:01.674876  260527 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1122 00:20:01.675176  260527 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:20:01.676975  260527 out.go:179] * Using Docker driver with root privileges
	I1122 00:20:01.678251  260527 cni.go:84] Creating CNI manager for ""
	I1122 00:20:01.678367  260527 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:20:01.678383  260527 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1122 00:20:01.678481  260527 start.go:353] cluster config:
	{Name:embed-certs-491677 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-491677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:20:01.680036  260527 out.go:179] * Starting "embed-certs-491677" primary control-plane node in "embed-certs-491677" cluster
	I1122 00:20:01.683810  260527 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1122 00:20:01.685242  260527 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:20:01.686680  260527 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:20:01.686729  260527 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1122 00:20:01.686743  260527 cache.go:65] Caching tarball of preloaded images
	I1122 00:20:01.686775  260527 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:20:01.686916  260527 preload.go:238] Found /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1122 00:20:01.686942  260527 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1122 00:20:01.687116  260527 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/config.json ...
	I1122 00:20:01.687148  260527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/config.json: {Name:mkf02d672882aad1c3b94e79745f8cf62e3f5b13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:20:01.708872  260527 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:20:01.708897  260527 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:20:01.708914  260527 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:20:01.708943  260527 start.go:360] acquireMachinesLock for embed-certs-491677: {Name:mkbe59d49caffedca862a9ecb177d8d82196efdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:20:01.709044  260527 start.go:364] duration metric: took 84.98µs to acquireMachinesLock for "embed-certs-491677"
	I1122 00:20:01.709067  260527 start.go:93] Provisioning new machine with config: &{Name:embed-certs-491677 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-491677 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1122 00:20:01.709131  260527 start.go:125] createHost starting for "" (driver="docker")
	I1122 00:19:58.829298  218693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:19:58.829759  218693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:19:58.829815  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:19:58.829864  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:19:58.856999  218693 cri.go:89] found id: "031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:19:58.857027  218693 cri.go:89] found id: "2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:19:58.857033  218693 cri.go:89] found id: ""
	I1122 00:19:58.857044  218693 logs.go:282] 2 containers: [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587]
	I1122 00:19:58.857093  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:19:58.861107  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:19:58.865268  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1122 00:19:58.865337  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:19:58.892542  218693 cri.go:89] found id: "ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:19:58.892564  218693 cri.go:89] found id: ""
	I1122 00:19:58.892572  218693 logs.go:282] 1 containers: [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7]
	I1122 00:19:58.892626  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:19:58.896771  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1122 00:19:58.896846  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:19:58.925628  218693 cri.go:89] found id: ""
	I1122 00:19:58.925652  218693 logs.go:282] 0 containers: []
	W1122 00:19:58.925660  218693 logs.go:284] No container was found matching "coredns"
	I1122 00:19:58.925666  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:19:58.925724  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:19:58.955304  218693 cri.go:89] found id: "8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:19:58.955326  218693 cri.go:89] found id: "b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:19:58.955332  218693 cri.go:89] found id: ""
	I1122 00:19:58.955340  218693 logs.go:282] 2 containers: [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2]
	I1122 00:19:58.955397  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:19:58.959396  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:19:58.963562  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:19:58.963626  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:19:58.991860  218693 cri.go:89] found id: ""
	I1122 00:19:58.991883  218693 logs.go:282] 0 containers: []
	W1122 00:19:58.991890  218693 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:19:58.991895  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:19:58.991949  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:19:59.020457  218693 cri.go:89] found id: "91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:19:59.020483  218693 cri.go:89] found id: "13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:19:59.020489  218693 cri.go:89] found id: ""
	I1122 00:19:59.020502  218693 logs.go:282] 2 containers: [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216]
	I1122 00:19:59.020550  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:19:59.024967  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:19:59.031778  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1122 00:19:59.031854  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:19:59.061726  218693 cri.go:89] found id: ""
	I1122 00:19:59.061752  218693 logs.go:282] 0 containers: []
	W1122 00:19:59.061763  218693 logs.go:284] No container was found matching "kindnet"
	I1122 00:19:59.061771  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:19:59.061831  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:19:59.089141  218693 cri.go:89] found id: ""
	I1122 00:19:59.089164  218693 logs.go:282] 0 containers: []
	W1122 00:19:59.089174  218693 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:19:59.089185  218693 logs.go:123] Gathering logs for kubelet ...
	I1122 00:19:59.089198  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:19:59.186417  218693 logs.go:123] Gathering logs for dmesg ...
	I1122 00:19:59.186452  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:19:59.201060  218693 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:19:59.201095  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:19:59.264254  218693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:19:59.264297  218693 logs.go:123] Gathering logs for kube-apiserver [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d] ...
	I1122 00:19:59.264313  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:19:59.303605  218693 logs.go:123] Gathering logs for kube-scheduler [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78] ...
	I1122 00:19:59.303643  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:19:59.358382  218693 logs.go:123] Gathering logs for kube-controller-manager [13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216] ...
	I1122 00:19:59.358425  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:19:59.398629  218693 logs.go:123] Gathering logs for containerd ...
	I1122 00:19:59.398669  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1122 00:19:59.449463  218693 logs.go:123] Gathering logs for kube-apiserver [2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587] ...
	I1122 00:19:59.449505  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:19:59.487365  218693 logs.go:123] Gathering logs for etcd [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7] ...
	I1122 00:19:59.487403  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:19:59.526046  218693 logs.go:123] Gathering logs for kube-scheduler [b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2] ...
	I1122 00:19:59.526080  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:19:59.562812  218693 logs.go:123] Gathering logs for kube-controller-manager [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a] ...
	I1122 00:19:59.562843  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:19:59.594191  218693 logs.go:123] Gathering logs for container status ...
	I1122 00:19:59.594230  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:20:02.129372  218693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:20:02.129923  218693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:20:02.130004  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:20:02.130071  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:20:02.161455  218693 cri.go:89] found id: "031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:02.161484  218693 cri.go:89] found id: "2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:02.161490  218693 cri.go:89] found id: ""
	I1122 00:20:02.161501  218693 logs.go:282] 2 containers: [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587]
	I1122 00:20:02.161563  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:02.165824  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:02.170451  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1122 00:20:02.170522  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:19:58.029853  251199 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-781232" context rescaled to 1 replicas
	W1122 00:19:59.529847  251199 node_ready.go:57] node "no-preload-781232" has "Ready":"False" status (will retry)
	W1122 00:20:01.530493  251199 node_ready.go:57] node "no-preload-781232" has "Ready":"False" status (will retry)
	W1122 00:20:00.520224  247021 node_ready.go:57] node "old-k8s-version-462319" has "Ready":"False" status (will retry)
	I1122 00:20:01.019651  247021 node_ready.go:49] node "old-k8s-version-462319" is "Ready"
	I1122 00:20:01.019681  247021 node_ready.go:38] duration metric: took 14.003330086s for node "old-k8s-version-462319" to be "Ready" ...
	I1122 00:20:01.019696  247021 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:20:01.019743  247021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:20:01.032926  247021 api_server.go:72] duration metric: took 14.481952557s to wait for apiserver process to appear ...
	I1122 00:20:01.032954  247021 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:20:01.032973  247021 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:20:01.039899  247021 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1122 00:20:01.041146  247021 api_server.go:141] control plane version: v1.28.0
	I1122 00:20:01.041172  247021 api_server.go:131] duration metric: took 8.212119ms to wait for apiserver health ...
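(Reader note: the healthz wait that just succeeded here, and the repeated "connection refused" attempts against 192.168.76.2 elsewhere in this log, amount to an HTTPS GET of /healthz retried until it returns 200. A standalone sketch of that probe follows; skipping certificate verification is an illustration-only shortcut and does not reproduce minikube's own client setup.)

// Sketch of the apiserver healthz probe performed above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.103.2:8443/healthz")
		if err != nil {
			fmt.Println("apiserver not ready:", err)
			time.Sleep(time.Second)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		if resp.StatusCode == http.StatusOK {
			return
		}
		time.Sleep(time.Second)
	}
}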
	I1122 00:20:01.041191  247021 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:20:01.044815  247021 system_pods.go:59] 8 kube-system pods found
	I1122 00:20:01.044853  247021 system_pods.go:61] "coredns-5dd5756b68-pqbfp" [44750e8d-5eeb-4845-9029-a58cbf976b62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:01.044862  247021 system_pods.go:61] "etcd-old-k8s-version-462319" [9580468b-aa0f-4d73-9c35-f9cc4c817cdd] Running
	I1122 00:20:01.044874  247021 system_pods.go:61] "kindnet-ldtd8" [6bf161d2-c442-466d-98b8-c313a127bf22] Running
	I1122 00:20:01.044879  247021 system_pods.go:61] "kube-apiserver-old-k8s-version-462319" [2f4b6fd0-2929-448d-820c-aabf2a9d4744] Running
	I1122 00:20:01.044888  247021 system_pods.go:61] "kube-controller-manager-old-k8s-version-462319" [83b4a291-8bac-4581-b4a6-80471e7228eb] Running
	I1122 00:20:01.044897  247021 system_pods.go:61] "kube-proxy-kqrng" [643cd348-4af3-4720-af0d-e931f184742c] Running
	I1122 00:20:01.044901  247021 system_pods.go:61] "kube-scheduler-old-k8s-version-462319" [c1dc982d-cc79-4df6-bdc4-7e47f5d5236c] Running
	I1122 00:20:01.044909  247021 system_pods.go:61] "storage-provisioner" [fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:01.044918  247021 system_pods.go:74] duration metric: took 3.718269ms to wait for pod list to return data ...
	I1122 00:20:01.044929  247021 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:20:01.047150  247021 default_sa.go:45] found service account: "default"
	I1122 00:20:01.047173  247021 default_sa.go:55] duration metric: took 2.236156ms for default service account to be created ...
	I1122 00:20:01.047182  247021 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:20:01.050474  247021 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:01.050506  247021 system_pods.go:89] "coredns-5dd5756b68-pqbfp" [44750e8d-5eeb-4845-9029-a58cbf976b62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:01.050514  247021 system_pods.go:89] "etcd-old-k8s-version-462319" [9580468b-aa0f-4d73-9c35-f9cc4c817cdd] Running
	I1122 00:20:01.050523  247021 system_pods.go:89] "kindnet-ldtd8" [6bf161d2-c442-466d-98b8-c313a127bf22] Running
	I1122 00:20:01.050528  247021 system_pods.go:89] "kube-apiserver-old-k8s-version-462319" [2f4b6fd0-2929-448d-820c-aabf2a9d4744] Running
	I1122 00:20:01.050533  247021 system_pods.go:89] "kube-controller-manager-old-k8s-version-462319" [83b4a291-8bac-4581-b4a6-80471e7228eb] Running
	I1122 00:20:01.050539  247021 system_pods.go:89] "kube-proxy-kqrng" [643cd348-4af3-4720-af0d-e931f184742c] Running
	I1122 00:20:01.050544  247021 system_pods.go:89] "kube-scheduler-old-k8s-version-462319" [c1dc982d-cc79-4df6-bdc4-7e47f5d5236c] Running
	I1122 00:20:01.050551  247021 system_pods.go:89] "storage-provisioner" [fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:01.050577  247021 retry.go:31] will retry after 205.575764ms: missing components: kube-dns
	I1122 00:20:01.261814  247021 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:01.261847  247021 system_pods.go:89] "coredns-5dd5756b68-pqbfp" [44750e8d-5eeb-4845-9029-a58cbf976b62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:01.261859  247021 system_pods.go:89] "etcd-old-k8s-version-462319" [9580468b-aa0f-4d73-9c35-f9cc4c817cdd] Running
	I1122 00:20:01.261865  247021 system_pods.go:89] "kindnet-ldtd8" [6bf161d2-c442-466d-98b8-c313a127bf22] Running
	I1122 00:20:01.261869  247021 system_pods.go:89] "kube-apiserver-old-k8s-version-462319" [2f4b6fd0-2929-448d-820c-aabf2a9d4744] Running
	I1122 00:20:01.261873  247021 system_pods.go:89] "kube-controller-manager-old-k8s-version-462319" [83b4a291-8bac-4581-b4a6-80471e7228eb] Running
	I1122 00:20:01.261877  247021 system_pods.go:89] "kube-proxy-kqrng" [643cd348-4af3-4720-af0d-e931f184742c] Running
	I1122 00:20:01.261879  247021 system_pods.go:89] "kube-scheduler-old-k8s-version-462319" [c1dc982d-cc79-4df6-bdc4-7e47f5d5236c] Running
	I1122 00:20:01.261884  247021 system_pods.go:89] "storage-provisioner" [fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:01.261900  247021 retry.go:31] will retry after 236.21482ms: missing components: kube-dns
	I1122 00:20:01.502877  247021 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:01.502913  247021 system_pods.go:89] "coredns-5dd5756b68-pqbfp" [44750e8d-5eeb-4845-9029-a58cbf976b62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:01.502921  247021 system_pods.go:89] "etcd-old-k8s-version-462319" [9580468b-aa0f-4d73-9c35-f9cc4c817cdd] Running
	I1122 00:20:01.502929  247021 system_pods.go:89] "kindnet-ldtd8" [6bf161d2-c442-466d-98b8-c313a127bf22] Running
	I1122 00:20:01.502935  247021 system_pods.go:89] "kube-apiserver-old-k8s-version-462319" [2f4b6fd0-2929-448d-820c-aabf2a9d4744] Running
	I1122 00:20:01.502952  247021 system_pods.go:89] "kube-controller-manager-old-k8s-version-462319" [83b4a291-8bac-4581-b4a6-80471e7228eb] Running
	I1122 00:20:01.502957  247021 system_pods.go:89] "kube-proxy-kqrng" [643cd348-4af3-4720-af0d-e931f184742c] Running
	I1122 00:20:01.502962  247021 system_pods.go:89] "kube-scheduler-old-k8s-version-462319" [c1dc982d-cc79-4df6-bdc4-7e47f5d5236c] Running
	I1122 00:20:01.502984  247021 system_pods.go:89] "storage-provisioner" [fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:01.503005  247021 retry.go:31] will retry after 442.873739ms: missing components: kube-dns
	I1122 00:20:01.950449  247021 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:01.950483  247021 system_pods.go:89] "coredns-5dd5756b68-pqbfp" [44750e8d-5eeb-4845-9029-a58cbf976b62] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:01.950492  247021 system_pods.go:89] "etcd-old-k8s-version-462319" [9580468b-aa0f-4d73-9c35-f9cc4c817cdd] Running
	I1122 00:20:01.950500  247021 system_pods.go:89] "kindnet-ldtd8" [6bf161d2-c442-466d-98b8-c313a127bf22] Running
	I1122 00:20:01.950505  247021 system_pods.go:89] "kube-apiserver-old-k8s-version-462319" [2f4b6fd0-2929-448d-820c-aabf2a9d4744] Running
	I1122 00:20:01.950516  247021 system_pods.go:89] "kube-controller-manager-old-k8s-version-462319" [83b4a291-8bac-4581-b4a6-80471e7228eb] Running
	I1122 00:20:01.950521  247021 system_pods.go:89] "kube-proxy-kqrng" [643cd348-4af3-4720-af0d-e931f184742c] Running
	I1122 00:20:01.950526  247021 system_pods.go:89] "kube-scheduler-old-k8s-version-462319" [c1dc982d-cc79-4df6-bdc4-7e47f5d5236c] Running
	I1122 00:20:01.950530  247021 system_pods.go:89] "storage-provisioner" [fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2] Running
	I1122 00:20:01.950541  247021 system_pods.go:126] duration metric: took 903.352039ms to wait for k8s-apps to be running ...
	I1122 00:20:01.950553  247021 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:20:01.950602  247021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:20:01.964580  247021 system_svc.go:56] duration metric: took 14.015441ms WaitForService to wait for kubelet
	I1122 00:20:01.964612  247021 kubeadm.go:587] duration metric: took 15.413644993s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:20:01.964634  247021 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:20:01.968157  247021 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1122 00:20:01.968185  247021 node_conditions.go:123] node cpu capacity is 8
	I1122 00:20:01.968205  247021 node_conditions.go:105] duration metric: took 3.565831ms to run NodePressure ...
	I1122 00:20:01.968227  247021 start.go:242] waiting for startup goroutines ...
	I1122 00:20:01.968237  247021 start.go:247] waiting for cluster config update ...
	I1122 00:20:01.968254  247021 start.go:256] writing updated cluster config ...
	I1122 00:20:01.968545  247021 ssh_runner.go:195] Run: rm -f paused
	I1122 00:20:01.972712  247021 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:20:01.976920  247021 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-pqbfp" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:02.983354  247021 pod_ready.go:94] pod "coredns-5dd5756b68-pqbfp" is "Ready"
	I1122 00:20:02.983385  247021 pod_ready.go:86] duration metric: took 1.00643947s for pod "coredns-5dd5756b68-pqbfp" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:02.987209  247021 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:02.992024  247021 pod_ready.go:94] pod "etcd-old-k8s-version-462319" is "Ready"
	I1122 00:20:02.992053  247021 pod_ready.go:86] duration metric: took 4.821819ms for pod "etcd-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:02.994875  247021 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:02.998765  247021 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-462319" is "Ready"
	I1122 00:20:02.998789  247021 pod_ready.go:86] duration metric: took 3.892836ms for pod "kube-apiserver-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:03.001798  247021 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:03.181579  247021 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-462319" is "Ready"
	I1122 00:20:03.181611  247021 pod_ready.go:86] duration metric: took 179.791243ms for pod "kube-controller-manager-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:03.381883  247021 pod_ready.go:83] waiting for pod "kube-proxy-kqrng" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:03.781562  247021 pod_ready.go:94] pod "kube-proxy-kqrng" is "Ready"
	I1122 00:20:03.781594  247021 pod_ready.go:86] duration metric: took 399.684082ms for pod "kube-proxy-kqrng" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:03.981736  247021 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:04.381559  247021 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-462319" is "Ready"
	I1122 00:20:04.381590  247021 pod_ready.go:86] duration metric: took 399.825883ms for pod "kube-scheduler-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:04.381604  247021 pod_ready.go:40] duration metric: took 2.408861294s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:20:04.431804  247021 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1122 00:20:04.435233  247021 out.go:203] 
	W1122 00:20:04.436473  247021 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1122 00:20:04.437863  247021 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1122 00:20:04.439555  247021 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-462319" cluster and "default" namespace by default
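(Reader note: the pod_ready.go waits above gate on each kube-system pod reporting the Ready condition. The following is one way to express the same gate with client-go; it is a sketch, not minikube's code, and the kubeconfig path, namespace, and pod name are taken from this run purely as example values.)

// Sketch: poll a pod until its PodReady condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-pqbfp", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}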
	I1122 00:20:01.711315  260527 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1122 00:20:01.711555  260527 start.go:159] libmachine.API.Create for "embed-certs-491677" (driver="docker")
	I1122 00:20:01.711610  260527 client.go:173] LocalClient.Create starting
	I1122 00:20:01.711685  260527 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem
	I1122 00:20:01.711719  260527 main.go:143] libmachine: Decoding PEM data...
	I1122 00:20:01.711737  260527 main.go:143] libmachine: Parsing certificate...
	I1122 00:20:01.711816  260527 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9059/.minikube/certs/cert.pem
	I1122 00:20:01.711837  260527 main.go:143] libmachine: Decoding PEM data...
	I1122 00:20:01.711846  260527 main.go:143] libmachine: Parsing certificate...
	I1122 00:20:01.712184  260527 cli_runner.go:164] Run: docker network inspect embed-certs-491677 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 00:20:01.730686  260527 cli_runner.go:211] docker network inspect embed-certs-491677 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 00:20:01.730752  260527 network_create.go:284] running [docker network inspect embed-certs-491677] to gather additional debugging logs...
	I1122 00:20:01.730771  260527 cli_runner.go:164] Run: docker network inspect embed-certs-491677
	W1122 00:20:01.749708  260527 cli_runner.go:211] docker network inspect embed-certs-491677 returned with exit code 1
	I1122 00:20:01.749739  260527 network_create.go:287] error running [docker network inspect embed-certs-491677]: docker network inspect embed-certs-491677: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-491677 not found
	I1122 00:20:01.749755  260527 network_create.go:289] output of [docker network inspect embed-certs-491677]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-491677 not found
	
	** /stderr **
	I1122 00:20:01.749902  260527 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:20:01.769006  260527 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1df6c22ede91 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:c7:f4:a5:24:54} reservation:<nil>}
	I1122 00:20:01.769731  260527 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-7d48551462a8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:3b:0e:74:ee:57} reservation:<nil>}
	I1122 00:20:01.770416  260527 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c50004b7f5b6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:73:1e:0d:b7:11} reservation:<nil>}
	I1122 00:20:01.771113  260527 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-166d2f324fb5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:da:99:1e:87:6f} reservation:<nil>}
	I1122 00:20:01.771891  260527 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ebca10}
	I1122 00:20:01.771919  260527 network_create.go:124] attempt to create docker network embed-certs-491677 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1122 00:20:01.771970  260527 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-491677 embed-certs-491677
	I1122 00:20:01.823460  260527 network_create.go:108] docker network embed-certs-491677 192.168.85.0/24 created
	I1122 00:20:01.823495  260527 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-491677" container
	I1122 00:20:01.823677  260527 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 00:20:01.843300  260527 cli_runner.go:164] Run: docker volume create embed-certs-491677 --label name.minikube.sigs.k8s.io=embed-certs-491677 --label created_by.minikube.sigs.k8s.io=true
	I1122 00:20:01.863723  260527 oci.go:103] Successfully created a docker volume embed-certs-491677
	I1122 00:20:01.863797  260527 cli_runner.go:164] Run: docker run --rm --name embed-certs-491677-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-491677 --entrypoint /usr/bin/test -v embed-certs-491677:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1122 00:20:02.270865  260527 oci.go:107] Successfully prepared a docker volume embed-certs-491677
	I1122 00:20:02.270965  260527 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:20:02.270986  260527 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:20:02.271058  260527 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-491677:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
	I1122 00:20:02.204729  218693 cri.go:89] found id: "ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:02.204756  218693 cri.go:89] found id: ""
	I1122 00:20:02.204766  218693 logs.go:282] 1 containers: [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7]
	I1122 00:20:02.204829  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:02.209535  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1122 00:20:02.209603  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:20:02.247383  218693 cri.go:89] found id: ""
	I1122 00:20:02.247408  218693 logs.go:282] 0 containers: []
	W1122 00:20:02.247416  218693 logs.go:284] No container was found matching "coredns"
	I1122 00:20:02.247422  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:20:02.247484  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:20:02.277440  218693 cri.go:89] found id: "8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:02.277466  218693 cri.go:89] found id: "b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:02.277473  218693 cri.go:89] found id: ""
	I1122 00:20:02.277483  218693 logs.go:282] 2 containers: [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2]
	I1122 00:20:02.277545  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:02.282049  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:02.286514  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:20:02.286581  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:20:02.316706  218693 cri.go:89] found id: ""
	I1122 00:20:02.316733  218693 logs.go:282] 0 containers: []
	W1122 00:20:02.316744  218693 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:20:02.316753  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:20:02.316813  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:20:02.347451  218693 cri.go:89] found id: "91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:02.347471  218693 cri.go:89] found id: "13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:02.347476  218693 cri.go:89] found id: ""
	I1122 00:20:02.347486  218693 logs.go:282] 2 containers: [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216]
	I1122 00:20:02.347542  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:02.352378  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:02.356502  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1122 00:20:02.356561  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:20:02.384778  218693 cri.go:89] found id: ""
	I1122 00:20:02.384802  218693 logs.go:282] 0 containers: []
	W1122 00:20:02.384814  218693 logs.go:284] No container was found matching "kindnet"
	I1122 00:20:02.384825  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:20:02.384887  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:20:02.421102  218693 cri.go:89] found id: ""
	I1122 00:20:02.421131  218693 logs.go:282] 0 containers: []
	W1122 00:20:02.421143  218693 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:20:02.421156  218693 logs.go:123] Gathering logs for containerd ...
	I1122 00:20:02.421171  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1122 00:20:02.477880  218693 logs.go:123] Gathering logs for kubelet ...
	I1122 00:20:02.477924  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:20:02.574856  218693 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:20:02.574892  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:20:02.641120  218693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:20:02.641142  218693 logs.go:123] Gathering logs for kube-apiserver [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d] ...
	I1122 00:20:02.641154  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:02.681648  218693 logs.go:123] Gathering logs for kube-scheduler [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78] ...
	I1122 00:20:02.681686  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:02.739093  218693 logs.go:123] Gathering logs for kube-scheduler [b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2] ...
	I1122 00:20:02.739128  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:02.774358  218693 logs.go:123] Gathering logs for kube-controller-manager [13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216] ...
	I1122 00:20:02.774395  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:02.810272  218693 logs.go:123] Gathering logs for container status ...
	I1122 00:20:02.810310  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:20:02.842900  218693 logs.go:123] Gathering logs for dmesg ...
	I1122 00:20:02.842942  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:20:02.857743  218693 logs.go:123] Gathering logs for kube-apiserver [2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587] ...
	I1122 00:20:02.857784  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:02.894229  218693 logs.go:123] Gathering logs for etcd [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7] ...
	I1122 00:20:02.894272  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:02.929523  218693 logs.go:123] Gathering logs for kube-controller-manager [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a] ...
	I1122 00:20:02.929555  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:05.459958  218693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:20:05.460532  218693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:20:05.460597  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:20:05.460676  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:20:05.488636  218693 cri.go:89] found id: "031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:05.488658  218693 cri.go:89] found id: "2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:05.488662  218693 cri.go:89] found id: ""
	I1122 00:20:05.488670  218693 logs.go:282] 2 containers: [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587]
	I1122 00:20:05.488715  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:05.492971  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:05.496804  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1122 00:20:05.496876  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:20:05.524856  218693 cri.go:89] found id: "ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:05.524883  218693 cri.go:89] found id: ""
	I1122 00:20:05.524902  218693 logs.go:282] 1 containers: [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7]
	I1122 00:20:05.524962  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:05.529434  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1122 00:20:05.529521  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:20:05.557780  218693 cri.go:89] found id: ""
	I1122 00:20:05.557805  218693 logs.go:282] 0 containers: []
	W1122 00:20:05.557819  218693 logs.go:284] No container was found matching "coredns"
	I1122 00:20:05.557828  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:20:05.557885  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:20:05.586142  218693 cri.go:89] found id: "8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:05.586166  218693 cri.go:89] found id: "b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:05.586173  218693 cri.go:89] found id: ""
	I1122 00:20:05.586184  218693 logs.go:282] 2 containers: [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2]
	I1122 00:20:05.586248  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:05.590458  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:05.594671  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:20:05.594752  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:20:05.623542  218693 cri.go:89] found id: ""
	I1122 00:20:05.623565  218693 logs.go:282] 0 containers: []
	W1122 00:20:05.623575  218693 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:20:05.623585  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:20:05.623653  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:20:05.651642  218693 cri.go:89] found id: "91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:05.651663  218693 cri.go:89] found id: "13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:05.651666  218693 cri.go:89] found id: ""
	I1122 00:20:05.651674  218693 logs.go:282] 2 containers: [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216]
	I1122 00:20:05.651724  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:05.655785  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:05.659668  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1122 00:20:05.659743  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:20:05.687725  218693 cri.go:89] found id: ""
	I1122 00:20:05.687748  218693 logs.go:282] 0 containers: []
	W1122 00:20:05.687756  218693 logs.go:284] No container was found matching "kindnet"
	I1122 00:20:05.687762  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:20:05.687810  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:20:05.714403  218693 cri.go:89] found id: ""
	I1122 00:20:05.714432  218693 logs.go:282] 0 containers: []
	W1122 00:20:05.714444  218693 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:20:05.714457  218693 logs.go:123] Gathering logs for kube-apiserver [2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587] ...
	I1122 00:20:05.714472  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:05.748851  218693 logs.go:123] Gathering logs for kube-scheduler [b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2] ...
	I1122 00:20:05.748901  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:05.784862  218693 logs.go:123] Gathering logs for kube-controller-manager [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a] ...
	I1122 00:20:05.784899  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:05.813532  218693 logs.go:123] Gathering logs for kube-controller-manager [13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216] ...
	I1122 00:20:05.813569  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:05.844930  218693 logs.go:123] Gathering logs for containerd ...
	I1122 00:20:05.844965  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1122 00:20:05.897273  218693 logs.go:123] Gathering logs for container status ...
	I1122 00:20:05.897337  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:20:05.935381  218693 logs.go:123] Gathering logs for kubelet ...
	I1122 00:20:05.935417  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:20:06.025566  218693 logs.go:123] Gathering logs for dmesg ...
	I1122 00:20:06.025612  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:20:06.040810  218693 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:20:06.040843  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:20:06.102006  218693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:20:06.102032  218693 logs.go:123] Gathering logs for etcd [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7] ...
	I1122 00:20:06.102050  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:06.136887  218693 logs.go:123] Gathering logs for kube-scheduler [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78] ...
	I1122 00:20:06.136937  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:06.192634  218693 logs.go:123] Gathering logs for kube-apiserver [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d] ...
	I1122 00:20:06.192674  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	W1122 00:20:04.029159  251199 node_ready.go:57] node "no-preload-781232" has "Ready":"False" status (will retry)
	W1122 00:20:06.067087  251199 node_ready.go:57] node "no-preload-781232" has "Ready":"False" status (will retry)
	I1122 00:20:06.722373  260527 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-491677:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.451238931s)
	I1122 00:20:06.722412  260527 kic.go:203] duration metric: took 4.451422839s to extract preloaded images to volume ...
	W1122 00:20:06.722533  260527 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1122 00:20:06.722570  260527 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1122 00:20:06.722615  260527 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:20:06.782296  260527 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-491677 --name embed-certs-491677 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-491677 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-491677 --network embed-certs-491677 --ip 192.168.85.2 --volume embed-certs-491677:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1122 00:20:07.109552  260527 cli_runner.go:164] Run: docker container inspect embed-certs-491677 --format={{.State.Running}}
	I1122 00:20:07.129178  260527 cli_runner.go:164] Run: docker container inspect embed-certs-491677 --format={{.State.Status}}
	I1122 00:20:07.148399  260527 cli_runner.go:164] Run: docker exec embed-certs-491677 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:20:07.196229  260527 oci.go:144] the created container "embed-certs-491677" has a running status.
	I1122 00:20:07.196362  260527 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa...
	I1122 00:20:07.257446  260527 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:20:07.289218  260527 cli_runner.go:164] Run: docker container inspect embed-certs-491677 --format={{.State.Status}}
	I1122 00:20:07.310559  260527 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:20:07.310578  260527 kic_runner.go:114] Args: [docker exec --privileged embed-certs-491677 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:20:07.351585  260527 cli_runner.go:164] Run: docker container inspect embed-certs-491677 --format={{.State.Status}}
	I1122 00:20:07.374469  260527 machine.go:94] provisionDockerMachine start ...
	I1122 00:20:07.374754  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:07.397641  260527 main.go:143] libmachine: Using SSH client type: native
	I1122 00:20:07.397885  260527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1122 00:20:07.397902  260527 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:20:07.398578  260527 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36770->127.0.0.1:33073: read: connection reset by peer
	I1122 00:20:10.523553  260527 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-491677
	
	I1122 00:20:10.523587  260527 ubuntu.go:182] provisioning hostname "embed-certs-491677"
	I1122 00:20:10.523652  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:10.544251  260527 main.go:143] libmachine: Using SSH client type: native
	I1122 00:20:10.544519  260527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1122 00:20:10.544536  260527 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-491677 && echo "embed-certs-491677" | sudo tee /etc/hostname
	I1122 00:20:10.679747  260527 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-491677
	
	I1122 00:20:10.679832  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:10.700586  260527 main.go:143] libmachine: Using SSH client type: native
	I1122 00:20:10.700833  260527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1122 00:20:10.700858  260527 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-491677' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-491677/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-491677' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:20:10.825289  260527 main.go:143] libmachine: SSH cmd err, output: <nil>: 
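(Reader note: provisionDockerMachine drives the hostname and /etc/hosts steps above over SSH to the container's published port, 33073 in this run, as user "docker" with the generated id_rsa key. A minimal sketch of that kind of single-command SSH execution with golang.org/x/crypto/ssh follows; ignoring the host key is an illustration-only shortcut.)

// Sketch: run one command over SSH, as the provisioning steps above do.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33073", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.Output("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("hostname: %s", out)
}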
	I1122 00:20:10.825326  260527 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9059/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9059/.minikube}
	I1122 00:20:10.825375  260527 ubuntu.go:190] setting up certificates
	I1122 00:20:10.825411  260527 provision.go:84] configureAuth start
	I1122 00:20:10.825489  260527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-491677
	I1122 00:20:10.844220  260527 provision.go:143] copyHostCerts
	I1122 00:20:10.844298  260527 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9059/.minikube/ca.pem, removing ...
	I1122 00:20:10.844307  260527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9059/.minikube/ca.pem
	I1122 00:20:10.844403  260527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9059/.minikube/ca.pem (1082 bytes)
	I1122 00:20:10.844496  260527 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9059/.minikube/cert.pem, removing ...
	I1122 00:20:10.844506  260527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9059/.minikube/cert.pem
	I1122 00:20:10.844532  260527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9059/.minikube/cert.pem (1123 bytes)
	I1122 00:20:10.844590  260527 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9059/.minikube/key.pem, removing ...
	I1122 00:20:10.844598  260527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9059/.minikube/key.pem
	I1122 00:20:10.844620  260527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9059/.minikube/key.pem (1679 bytes)
	I1122 00:20:10.844669  260527 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca-key.pem org=jenkins.embed-certs-491677 san=[127.0.0.1 192.168.85.2 embed-certs-491677 localhost minikube]
	I1122 00:20:10.881095  260527 provision.go:177] copyRemoteCerts
	I1122 00:20:10.881150  260527 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:20:10.881198  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:10.899974  260527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa Username:docker}
	I1122 00:20:10.993091  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:20:11.014763  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1122 00:20:11.034702  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1122 00:20:11.053678  260527 provision.go:87] duration metric: took 228.246896ms to configureAuth
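configureAuth copies the CA material and generates a server certificate with the SAN list logged a few lines up (127.0.0.1, 192.168.85.2, the profile name, localhost, minikube). If a later TLS step rejects the cert, the SANs actually baked into the copied file can be inspected in place; a sketch, assuming openssl is present in the kicbase image and using the /etc/docker/server.pem path from the scp lines above:

	docker exec embed-certs-491677 sh -c \
	  'openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 "Subject Alternative Name"'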
	I1122 00:20:11.053708  260527 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:20:11.053892  260527 config.go:182] Loaded profile config "embed-certs-491677": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:20:11.053909  260527 machine.go:97] duration metric: took 3.67941396s to provisionDockerMachine
	I1122 00:20:11.053917  260527 client.go:176] duration metric: took 9.342299036s to LocalClient.Create
	I1122 00:20:11.053943  260527 start.go:167] duration metric: took 9.342388491s to libmachine.API.Create "embed-certs-491677"
	I1122 00:20:11.053956  260527 start.go:293] postStartSetup for "embed-certs-491677" (driver="docker")
	I1122 00:20:11.053984  260527 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:20:11.054052  260527 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:20:11.054103  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:11.073167  260527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa Username:docker}
	I1122 00:20:11.168158  260527 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:20:11.172076  260527 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:20:11.172422  260527 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:20:11.172459  260527 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9059/.minikube/addons for local assets ...
	I1122 00:20:11.172556  260527 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9059/.minikube/files for local assets ...
	I1122 00:20:11.172675  260527 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/ssl/certs/145302.pem -> 145302.pem in /etc/ssl/certs
	I1122 00:20:11.172811  260527 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:20:11.182207  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/ssl/certs/145302.pem --> /etc/ssl/certs/145302.pem (1708 bytes)
	I1122 00:20:11.203784  260527 start.go:296] duration metric: took 149.811059ms for postStartSetup
	I1122 00:20:11.204173  260527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-491677
	I1122 00:20:11.222954  260527 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/config.json ...
	I1122 00:20:11.223305  260527 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:20:11.223354  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:11.242018  260527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa Username:docker}
	I1122 00:20:11.333726  260527 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:20:11.338527  260527 start.go:128] duration metric: took 9.62936097s to createHost
	I1122 00:20:11.338558  260527 start.go:83] releasing machines lock for "embed-certs-491677", held for 9.629502399s
	I1122 00:20:11.338631  260527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-491677
	I1122 00:20:11.357563  260527 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:20:11.357634  260527 ssh_runner.go:195] Run: cat /version.json
	I1122 00:20:11.357684  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:11.357690  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:11.377098  260527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa Username:docker}
	I1122 00:20:11.378067  260527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa Username:docker}
	I1122 00:20:08.727161  218693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:20:08.727652  218693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:20:08.727710  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:20:08.727762  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:20:08.754498  218693 cri.go:89] found id: "031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:08.754522  218693 cri.go:89] found id: "2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:08.754527  218693 cri.go:89] found id: ""
	I1122 00:20:08.754535  218693 logs.go:282] 2 containers: [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587]
	I1122 00:20:08.754583  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:08.758867  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:08.762449  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1122 00:20:08.762501  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:20:08.788422  218693 cri.go:89] found id: "ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:08.788444  218693 cri.go:89] found id: ""
	I1122 00:20:08.788455  218693 logs.go:282] 1 containers: [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7]
	I1122 00:20:08.788512  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:08.792603  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1122 00:20:08.792668  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:20:08.820677  218693 cri.go:89] found id: ""
	I1122 00:20:08.820703  218693 logs.go:282] 0 containers: []
	W1122 00:20:08.820711  218693 logs.go:284] No container was found matching "coredns"
	I1122 00:20:08.820717  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:20:08.820769  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:20:08.848396  218693 cri.go:89] found id: "8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:08.848418  218693 cri.go:89] found id: "b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:08.848422  218693 cri.go:89] found id: ""
	I1122 00:20:08.848429  218693 logs.go:282] 2 containers: [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2]
	I1122 00:20:08.848485  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:08.852633  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:08.856393  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:20:08.856469  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:20:08.884423  218693 cri.go:89] found id: ""
	I1122 00:20:08.884454  218693 logs.go:282] 0 containers: []
	W1122 00:20:08.884467  218693 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:20:08.884476  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:20:08.884529  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:20:08.911898  218693 cri.go:89] found id: "91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:08.911917  218693 cri.go:89] found id: "13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:08.911921  218693 cri.go:89] found id: ""
	I1122 00:20:08.911928  218693 logs.go:282] 2 containers: [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216]
	I1122 00:20:08.912000  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:08.916097  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:08.919808  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1122 00:20:08.919868  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:20:08.945704  218693 cri.go:89] found id: ""
	I1122 00:20:08.945731  218693 logs.go:282] 0 containers: []
	W1122 00:20:08.945742  218693 logs.go:284] No container was found matching "kindnet"
	I1122 00:20:08.945750  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:20:08.945811  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:20:08.971599  218693 cri.go:89] found id: ""
	I1122 00:20:08.971630  218693 logs.go:282] 0 containers: []
	W1122 00:20:08.971642  218693 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:20:08.971658  218693 logs.go:123] Gathering logs for dmesg ...
	I1122 00:20:08.971686  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:20:08.985779  218693 logs.go:123] Gathering logs for etcd [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7] ...
	I1122 00:20:08.985806  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:09.018373  218693 logs.go:123] Gathering logs for kube-scheduler [b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2] ...
	I1122 00:20:09.018407  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:09.055328  218693 logs.go:123] Gathering logs for containerd ...
	I1122 00:20:09.055359  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1122 00:20:09.098567  218693 logs.go:123] Gathering logs for kubelet ...
	I1122 00:20:09.098608  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:20:09.183392  218693 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:20:09.183433  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:20:09.242636  218693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:20:09.242654  218693 logs.go:123] Gathering logs for kube-apiserver [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d] ...
	I1122 00:20:09.242666  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:09.276133  218693 logs.go:123] Gathering logs for kube-apiserver [2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587] ...
	I1122 00:20:09.276179  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:09.310731  218693 logs.go:123] Gathering logs for kube-scheduler [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78] ...
	I1122 00:20:09.310769  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:09.362187  218693 logs.go:123] Gathering logs for kube-controller-manager [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a] ...
	I1122 00:20:09.362226  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:09.391737  218693 logs.go:123] Gathering logs for kube-controller-manager [13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216] ...
	I1122 00:20:09.391763  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:09.425753  218693 logs.go:123] Gathering logs for container status ...
	I1122 00:20:09.425787  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
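The 218693 run above is in a probe-and-dump loop: hit the apiserver's /healthz, and on "connection refused" collect per-component logs via crictl/journalctl before backing off and retrying. The probe itself is easy to reproduce by hand; a minimal sketch (-k because the apiserver's serving cert is not trusted by the host):

	until curl -sk -o /dev/null -w '%{http_code}\n' https://192.168.76.2:8443/healthz | grep -q '^200$'; do
	  sleep 2
	done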
	I1122 00:20:11.959328  218693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:20:11.959805  218693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:20:11.959868  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:20:11.959935  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:20:11.993113  218693 cri.go:89] found id: "031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:11.993137  218693 cri.go:89] found id: "2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:11.993143  218693 cri.go:89] found id: ""
	I1122 00:20:11.993153  218693 logs.go:282] 2 containers: [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587]
	I1122 00:20:11.993213  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:11.997946  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:12.002616  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1122 00:20:12.002741  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:20:12.040113  218693 cri.go:89] found id: "ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:12.040150  218693 cri.go:89] found id: ""
	I1122 00:20:12.040160  218693 logs.go:282] 1 containers: [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7]
	I1122 00:20:12.040220  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:12.045665  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1122 00:20:12.045732  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:20:12.081343  218693 cri.go:89] found id: ""
	I1122 00:20:12.081375  218693 logs.go:282] 0 containers: []
	W1122 00:20:12.081384  218693 logs.go:284] No container was found matching "coredns"
	I1122 00:20:12.081389  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:20:12.081449  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:20:12.116486  218693 cri.go:89] found id: "8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:12.117024  218693 cri.go:89] found id: "b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:12.117045  218693 cri.go:89] found id: ""
	I1122 00:20:12.117055  218693 logs.go:282] 2 containers: [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2]
	I1122 00:20:12.117115  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:12.121469  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:12.125453  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:20:12.125520  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:20:12.159076  218693 cri.go:89] found id: ""
	I1122 00:20:12.159108  218693 logs.go:282] 0 containers: []
	W1122 00:20:12.159121  218693 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:20:12.159130  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:20:12.159191  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:20:11.523900  260527 ssh_runner.go:195] Run: systemctl --version
	I1122 00:20:11.531084  260527 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:20:11.536010  260527 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:20:11.536130  260527 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:20:11.563766  260527 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
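Bridge and podman CNI configs are disabled simply by renaming them with a .mk_disabled suffix, as the find/mv command above shows; a quick listing of the directory confirms what was left active (a sketch):

	docker exec embed-certs-491677 ls -l /etc/cni/net.d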
	I1122 00:20:11.563792  260527 start.go:496] detecting cgroup driver to use...
	I1122 00:20:11.563830  260527 detect.go:190] detected "systemd" cgroup driver on host os
	I1122 00:20:11.563873  260527 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1122 00:20:11.579543  260527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1122 00:20:11.593598  260527 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:20:11.593666  260527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:20:11.610889  260527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:20:11.629723  260527 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:20:11.730670  260527 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:20:11.819921  260527 docker.go:234] disabling docker service ...
	I1122 00:20:11.819985  260527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:20:11.839159  260527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:20:11.854142  260527 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:20:11.943699  260527 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:20:12.053855  260527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:20:12.073171  260527 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:20:12.089999  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1122 00:20:12.105012  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1122 00:20:12.117591  260527 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1122 00:20:12.117652  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1122 00:20:12.128817  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:20:12.142147  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1122 00:20:12.154635  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:20:12.169029  260527 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:20:12.181631  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1122 00:20:12.194568  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1122 00:20:12.207294  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1122 00:20:12.218684  260527 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:20:12.228679  260527 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:20:12.241707  260527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:20:12.337447  260527 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1122 00:20:12.443801  260527 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1122 00:20:12.443870  260527 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1122 00:20:12.448114  260527 start.go:564] Will wait 60s for crictl version
	I1122 00:20:12.448178  260527 ssh_runner.go:195] Run: which crictl
	I1122 00:20:12.452113  260527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:20:12.481619  260527 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1122 00:20:12.481687  260527 ssh_runner.go:195] Run: containerd --version
	I1122 00:20:12.506954  260527 ssh_runner.go:195] Run: containerd --version
	I1122 00:20:12.537127  260527 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
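Everything from the crictl.yaml write down to the containerd restart is plain sed surgery on /etc/containerd/config.toml: systemd cgroups (SystemdCgroup = true), the pause:3.10.1 sandbox image, runc v2 as the runtime, and enable_unprivileged_ports. Whether those edits landed can be checked with a grep inside the node; a sketch:

	docker exec embed-certs-491677 sh -c \
	  'grep -nE "SystemdCgroup|sandbox_image|enable_unprivileged_ports" /etc/containerd/config.toml'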
	W1122 00:20:08.528688  251199 node_ready.go:57] node "no-preload-781232" has "Ready":"False" status (will retry)
	W1122 00:20:10.529626  251199 node_ready.go:57] node "no-preload-781232" has "Ready":"False" status (will retry)
	I1122 00:20:12.029744  251199 node_ready.go:49] node "no-preload-781232" is "Ready"
	I1122 00:20:12.029782  251199 node_ready.go:38] duration metric: took 14.503754974s for node "no-preload-781232" to be "Ready" ...
	I1122 00:20:12.029799  251199 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:20:12.029867  251199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:20:12.049755  251199 api_server.go:72] duration metric: took 14.826557708s to wait for apiserver process to appear ...
	I1122 00:20:12.049782  251199 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:20:12.049803  251199 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1122 00:20:12.055733  251199 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1122 00:20:12.057374  251199 api_server.go:141] control plane version: v1.34.1
	I1122 00:20:12.057405  251199 api_server.go:131] duration metric: took 7.61544ms to wait for apiserver health ...
	I1122 00:20:12.057416  251199 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:20:12.062154  251199 system_pods.go:59] 8 kube-system pods found
	I1122 00:20:12.062190  251199 system_pods.go:61] "coredns-66bc5c9577-9wcct" [67b97cc5-016b-44d1-8119-dd6aa4932f83] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:12.062199  251199 system_pods.go:61] "etcd-no-preload-781232" [85c9627b-3102-439d-83e4-9ee3353591c1] Running
	I1122 00:20:12.062207  251199 system_pods.go:61] "kindnet-llcnc" [fcdd9f25-4804-47c2-8f09-b6a2d688a8bc] Running
	I1122 00:20:12.062212  251199 system_pods.go:61] "kube-apiserver-no-preload-781232" [4a4b5bf8-8262-46c5-9aa8-5a0bb0af364c] Running
	I1122 00:20:12.062218  251199 system_pods.go:61] "kube-controller-manager-no-preload-781232" [0c4fed80-9ce3-4b0d-99dd-ae11fc92104e] Running
	I1122 00:20:12.062223  251199 system_pods.go:61] "kube-proxy-685jg" [33a2d2c1-e364-4ec8-a9a0-69ba9146625f] Running
	I1122 00:20:12.062228  251199 system_pods.go:61] "kube-scheduler-no-preload-781232" [ec2ea83e-6638-4945-b4e4-ef3142f30481] Running
	I1122 00:20:12.062237  251199 system_pods.go:61] "storage-provisioner" [904bdf70-7728-45c5-a9ae-487aed28e6fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:12.062245  251199 system_pods.go:74] duration metric: took 4.821603ms to wait for pod list to return data ...
	I1122 00:20:12.062254  251199 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:20:12.065112  251199 default_sa.go:45] found service account: "default"
	I1122 00:20:12.065138  251199 default_sa.go:55] duration metric: took 2.848928ms for default service account to be created ...
	I1122 00:20:12.065149  251199 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:20:12.069582  251199 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:12.069625  251199 system_pods.go:89] "coredns-66bc5c9577-9wcct" [67b97cc5-016b-44d1-8119-dd6aa4932f83] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:12.069633  251199 system_pods.go:89] "etcd-no-preload-781232" [85c9627b-3102-439d-83e4-9ee3353591c1] Running
	I1122 00:20:12.069648  251199 system_pods.go:89] "kindnet-llcnc" [fcdd9f25-4804-47c2-8f09-b6a2d688a8bc] Running
	I1122 00:20:12.069655  251199 system_pods.go:89] "kube-apiserver-no-preload-781232" [4a4b5bf8-8262-46c5-9aa8-5a0bb0af364c] Running
	I1122 00:20:12.069661  251199 system_pods.go:89] "kube-controller-manager-no-preload-781232" [0c4fed80-9ce3-4b0d-99dd-ae11fc92104e] Running
	I1122 00:20:12.069666  251199 system_pods.go:89] "kube-proxy-685jg" [33a2d2c1-e364-4ec8-a9a0-69ba9146625f] Running
	I1122 00:20:12.069670  251199 system_pods.go:89] "kube-scheduler-no-preload-781232" [ec2ea83e-6638-4945-b4e4-ef3142f30481] Running
	I1122 00:20:12.069676  251199 system_pods.go:89] "storage-provisioner" [904bdf70-7728-45c5-a9ae-487aed28e6fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:12.069728  251199 retry.go:31] will retry after 227.269849ms: missing components: kube-dns
	I1122 00:20:12.301834  251199 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:12.301869  251199 system_pods.go:89] "coredns-66bc5c9577-9wcct" [67b97cc5-016b-44d1-8119-dd6aa4932f83] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:12.301877  251199 system_pods.go:89] "etcd-no-preload-781232" [85c9627b-3102-439d-83e4-9ee3353591c1] Running
	I1122 00:20:12.301886  251199 system_pods.go:89] "kindnet-llcnc" [fcdd9f25-4804-47c2-8f09-b6a2d688a8bc] Running
	I1122 00:20:12.301892  251199 system_pods.go:89] "kube-apiserver-no-preload-781232" [4a4b5bf8-8262-46c5-9aa8-5a0bb0af364c] Running
	I1122 00:20:12.301898  251199 system_pods.go:89] "kube-controller-manager-no-preload-781232" [0c4fed80-9ce3-4b0d-99dd-ae11fc92104e] Running
	I1122 00:20:12.301903  251199 system_pods.go:89] "kube-proxy-685jg" [33a2d2c1-e364-4ec8-a9a0-69ba9146625f] Running
	I1122 00:20:12.301910  251199 system_pods.go:89] "kube-scheduler-no-preload-781232" [ec2ea83e-6638-4945-b4e4-ef3142f30481] Running
	I1122 00:20:12.301917  251199 system_pods.go:89] "storage-provisioner" [904bdf70-7728-45c5-a9ae-487aed28e6fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:12.301938  251199 retry.go:31] will retry after 387.887736ms: missing components: kube-dns
	I1122 00:20:12.694992  251199 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:12.695026  251199 system_pods.go:89] "coredns-66bc5c9577-9wcct" [67b97cc5-016b-44d1-8119-dd6aa4932f83] Running
	I1122 00:20:12.695035  251199 system_pods.go:89] "etcd-no-preload-781232" [85c9627b-3102-439d-83e4-9ee3353591c1] Running
	I1122 00:20:12.695041  251199 system_pods.go:89] "kindnet-llcnc" [fcdd9f25-4804-47c2-8f09-b6a2d688a8bc] Running
	I1122 00:20:12.695047  251199 system_pods.go:89] "kube-apiserver-no-preload-781232" [4a4b5bf8-8262-46c5-9aa8-5a0bb0af364c] Running
	I1122 00:20:12.695052  251199 system_pods.go:89] "kube-controller-manager-no-preload-781232" [0c4fed80-9ce3-4b0d-99dd-ae11fc92104e] Running
	I1122 00:20:12.695060  251199 system_pods.go:89] "kube-proxy-685jg" [33a2d2c1-e364-4ec8-a9a0-69ba9146625f] Running
	I1122 00:20:12.695065  251199 system_pods.go:89] "kube-scheduler-no-preload-781232" [ec2ea83e-6638-4945-b4e4-ef3142f30481] Running
	I1122 00:20:12.695070  251199 system_pods.go:89] "storage-provisioner" [904bdf70-7728-45c5-a9ae-487aed28e6fc] Running
	I1122 00:20:12.695080  251199 system_pods.go:126] duration metric: took 629.924123ms to wait for k8s-apps to be running ...
	I1122 00:20:12.695093  251199 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:20:12.695144  251199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:20:12.708823  251199 system_svc.go:56] duration metric: took 13.721013ms WaitForService to wait for kubelet
	I1122 00:20:12.708855  251199 kubeadm.go:587] duration metric: took 15.485663176s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:20:12.708874  251199 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:20:12.712345  251199 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1122 00:20:12.712376  251199 node_conditions.go:123] node cpu capacity is 8
	I1122 00:20:12.712396  251199 node_conditions.go:105] duration metric: took 3.516354ms to run NodePressure ...
	I1122 00:20:12.712412  251199 start.go:242] waiting for startup goroutines ...
	I1122 00:20:12.712423  251199 start.go:247] waiting for cluster config update ...
	I1122 00:20:12.712441  251199 start.go:256] writing updated cluster config ...
	I1122 00:20:12.712733  251199 ssh_runner.go:195] Run: rm -f paused
	I1122 00:20:12.717390  251199 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:20:12.721696  251199 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9wcct" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:12.726947  251199 pod_ready.go:94] pod "coredns-66bc5c9577-9wcct" is "Ready"
	I1122 00:20:12.726976  251199 pod_ready.go:86] duration metric: took 5.255643ms for pod "coredns-66bc5c9577-9wcct" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:12.729559  251199 pod_ready.go:83] waiting for pod "etcd-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:12.734425  251199 pod_ready.go:94] pod "etcd-no-preload-781232" is "Ready"
	I1122 00:20:12.734455  251199 pod_ready.go:86] duration metric: took 4.86467ms for pod "etcd-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:12.736916  251199 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:12.741485  251199 pod_ready.go:94] pod "kube-apiserver-no-preload-781232" is "Ready"
	I1122 00:20:12.741515  251199 pod_ready.go:86] duration metric: took 4.574913ms for pod "kube-apiserver-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:12.743848  251199 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:13.121924  251199 pod_ready.go:94] pod "kube-controller-manager-no-preload-781232" is "Ready"
	I1122 00:20:13.121957  251199 pod_ready.go:86] duration metric: took 378.084436ms for pod "kube-controller-manager-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:13.322463  251199 pod_ready.go:83] waiting for pod "kube-proxy-685jg" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:13.721973  251199 pod_ready.go:94] pod "kube-proxy-685jg" is "Ready"
	I1122 00:20:13.722003  251199 pod_ready.go:86] duration metric: took 399.513258ms for pod "kube-proxy-685jg" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:13.922497  251199 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:14.322798  251199 pod_ready.go:94] pod "kube-scheduler-no-preload-781232" is "Ready"
	I1122 00:20:14.322835  251199 pod_ready.go:86] duration metric: took 400.307889ms for pod "kube-scheduler-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:14.322851  251199 pod_ready.go:40] duration metric: took 1.605427799s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:20:14.392629  251199 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1122 00:20:14.394856  251199 out.go:179] * Done! kubectl is now configured to use "no-preload-781232" cluster and "default" namespace by default
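The 251199 run walks through the usual readiness ladder before declaring the profile done: node Ready, apiserver /healthz, all kube-system pods running, the default service account, then per-component "Ready" gates. A rough kubectl equivalent is handy when reproducing this outside the test harness (a sketch, using the context name from the log):

	kubectl --context no-preload-781232 get --raw /healthz
	kubectl --context no-preload-781232 -n kube-system wait pod --all \
	  --for=condition=Ready --timeout=4m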
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	d01de905a2d07       56cc512116c8f       8 seconds ago       Running             busybox                   0                   e511b813570c1       busybox                                          default
	f7527a8afc668       ead0a4a53df89       15 seconds ago      Running             coredns                   0                   b00fa05a6c375       coredns-5dd5756b68-pqbfp                         kube-system
	f2a1ec178c227       6e38f40d628db       15 seconds ago      Running             storage-provisioner       0                   a3bbedf747991       storage-provisioner                              kube-system
	abad042f2a4ad       409467f978b4a       26 seconds ago      Running             kindnet-cni               0                   721fcd34a44d6       kindnet-ldtd8                                    kube-system
	5119ee9a69fb3       ea1030da44aa1       29 seconds ago      Running             kube-proxy                0                   be780c30602ce       kube-proxy-kqrng                                 kube-system
	4c35680ab2dd6       73deb9a3f7025       49 seconds ago      Running             etcd                      0                   adbbfe9941b27       etcd-old-k8s-version-462319                      kube-system
	1863b35aae093       f6f496300a2ae       49 seconds ago      Running             kube-scheduler            0                   45afb7772f575       kube-scheduler-old-k8s-version-462319            kube-system
	e398c42ad8188       bb5e0dde9054c       49 seconds ago      Running             kube-apiserver            0                   0ce7c78109ce7       kube-apiserver-old-k8s-version-462319            kube-system
	355ecffe75a3f       4be79c38a4bab       49 seconds ago      Running             kube-controller-manager   0                   5dfd6ffd80d1f       kube-controller-manager-old-k8s-version-462319   kube-system
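This table is crictl's view from inside the old-k8s-version-462319 node; the same listing can be pulled directly from the test machine (a sketch):

	docker exec old-k8s-version-462319 crictl ps -a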
	
	
	==> containerd <==
	Nov 22 00:20:01 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:01.327237013Z" level=info msg="connecting to shim f2a1ec178c227617bd32e678c94e3d44e606683f0b10ccdbc182dec6d6d5c9e9" address="unix:///run/containerd/s/62835cccd20d8437bb636df9ea457fe2506fdd9387d47f5e31a45c75f852a444" protocol=ttrpc version=3
	Nov 22 00:20:01 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:01.328631129Z" level=info msg="CreateContainer within sandbox \"b00fa05a6c375cb07b56b89e739f90401ad7f950dedcb886ca1774eba46a4293\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 22 00:20:01 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:01.336790890Z" level=info msg="Container f7527a8afc6683a9935b781bf3006cc9c368a534f3eafba3501b6509659a437b: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:20:01 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:01.343474448Z" level=info msg="CreateContainer within sandbox \"b00fa05a6c375cb07b56b89e739f90401ad7f950dedcb886ca1774eba46a4293\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f7527a8afc6683a9935b781bf3006cc9c368a534f3eafba3501b6509659a437b\""
	Nov 22 00:20:01 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:01.344107519Z" level=info msg="StartContainer for \"f7527a8afc6683a9935b781bf3006cc9c368a534f3eafba3501b6509659a437b\""
	Nov 22 00:20:01 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:01.345166179Z" level=info msg="connecting to shim f7527a8afc6683a9935b781bf3006cc9c368a534f3eafba3501b6509659a437b" address="unix:///run/containerd/s/39593751a6c9fe87428291df6153bccdab6c22a754601ae94cfc40e697ece6ec" protocol=ttrpc version=3
	Nov 22 00:20:01 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:01.389133316Z" level=info msg="StartContainer for \"f2a1ec178c227617bd32e678c94e3d44e606683f0b10ccdbc182dec6d6d5c9e9\" returns successfully"
	Nov 22 00:20:01 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:01.404040136Z" level=info msg="StartContainer for \"f7527a8afc6683a9935b781bf3006cc9c368a534f3eafba3501b6509659a437b\" returns successfully"
	Nov 22 00:20:05 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:05.083706178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:89dd9411-148d-4a8e-98d3-a51a8eab9d35,Namespace:default,Attempt:0,}"
	Nov 22 00:20:05 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:05.877683353Z" level=info msg="connecting to shim e511b813570c19e1d5c5c2002304caba5cc1bac5847092a53135ba9cb1b1dd7c" address="unix:///run/containerd/s/b045fc79abfabe20fc9affb730c643e7c442531994f349b7904cd7f34ab0272a" namespace=k8s.io protocol=ttrpc version=3
	Nov 22 00:20:06 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:06.066243350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:89dd9411-148d-4a8e-98d3-a51a8eab9d35,Namespace:default,Attempt:0,} returns sandbox id \"e511b813570c19e1d5c5c2002304caba5cc1bac5847092a53135ba9cb1b1dd7c\""
	Nov 22 00:20:06 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:06.068244404Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 22 00:20:08 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:08.300595484Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:20:08 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:08.301398927Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396644"
	Nov 22 00:20:08 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:08.302750252Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:20:08 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:08.304853958Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:20:08 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:08.305213907Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.236905893s"
	Nov 22 00:20:08 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:08.305247082Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 22 00:20:08 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:08.306892429Z" level=info msg="CreateContainer within sandbox \"e511b813570c19e1d5c5c2002304caba5cc1bac5847092a53135ba9cb1b1dd7c\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 22 00:20:08 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:08.314973197Z" level=info msg="Container d01de905a2d0700ad9691d5a73cf41f69bb587ec67e218858862ae31fcd53485: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:20:08 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:08.321465429Z" level=info msg="CreateContainer within sandbox \"e511b813570c19e1d5c5c2002304caba5cc1bac5847092a53135ba9cb1b1dd7c\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"d01de905a2d0700ad9691d5a73cf41f69bb587ec67e218858862ae31fcd53485\""
	Nov 22 00:20:08 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:08.322134703Z" level=info msg="StartContainer for \"d01de905a2d0700ad9691d5a73cf41f69bb587ec67e218858862ae31fcd53485\""
	Nov 22 00:20:08 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:08.323141205Z" level=info msg="connecting to shim d01de905a2d0700ad9691d5a73cf41f69bb587ec67e218858862ae31fcd53485" address="unix:///run/containerd/s/b045fc79abfabe20fc9affb730c643e7c442531994f349b7904cd7f34ab0272a" protocol=ttrpc version=3
	Nov 22 00:20:08 old-k8s-version-462319 containerd[666]: time="2025-11-22T00:20:08.376916692Z" level=info msg="StartContainer for \"d01de905a2d0700ad9691d5a73cf41f69bb587ec67e218858862ae31fcd53485\" returns successfully"
	Nov 22 00:20:13 old-k8s-version-462319 containerd[666]: E1122 00:20:13.803924     666 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [f7527a8afc6683a9935b781bf3006cc9c368a534f3eafba3501b6509659a437b] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60216 - 50495 "HINFO IN 8122801349455611517.3511563579879947437. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.074291599s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-462319
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-462319
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=old-k8s-version-462319
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_19_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:19:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-462319
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:20:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:20:04 +0000   Sat, 22 Nov 2025 00:19:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:20:04 +0000   Sat, 22 Nov 2025 00:19:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:20:04 +0000   Sat, 22 Nov 2025 00:19:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:20:04 +0000   Sat, 22 Nov 2025 00:20:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-462319
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 5665009e93b91d39dc05718b691e3875
	  System UUID:                1a763c28-0497-45f3-b9e8-458b8b4eb589
	  Boot ID:                    725aae03-f893-4e0b-b029-cbd3b00ccfdd
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-5dd5756b68-pqbfp                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     31s
	  kube-system                 etcd-old-k8s-version-462319                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         44s
	  kube-system                 kindnet-ldtd8                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-old-k8s-version-462319             250m (3%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-controller-manager-old-k8s-version-462319    200m (2%)     0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-proxy-kqrng                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-old-k8s-version-462319             100m (1%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 29s   kube-proxy       
	  Normal  Starting                 44s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  44s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  44s   kubelet          Node old-k8s-version-462319 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s   kubelet          Node old-k8s-version-462319 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s   kubelet          Node old-k8s-version-462319 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           32s   node-controller  Node old-k8s-version-462319 event: Registered Node old-k8s-version-462319 in Controller
	  Normal  NodeReady                17s   kubelet          Node old-k8s-version-462319 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov21 23:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000865] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.410276] i8042: Warning: Keylock active
	[  +0.014947] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.495836] block sda: the capability attribute has been deprecated.
	[  +0.091740] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024333] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.452540] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [4c35680ab2dd6966de549749b29af9a5a8bccb172d03360ef57391e45ea9f885] <==
	{"level":"info","ts":"2025-11-22T00:19:28.060277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-11-22T00:19:28.060288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-11-22T00:19:28.061026Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-22T00:19:28.061614Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-22T00:19:28.061614Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-462319 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-22T00:19:28.061648Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-22T00:19:28.06183Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-22T00:19:28.0621Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-22T00:19:28.062388Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-22T00:19:28.062242Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-22T00:19:28.062743Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-22T00:19:28.064288Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-11-22T00:19:28.064366Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-22T00:19:32.633697Z","caller":"traceutil/trace.go:171","msg":"trace[64928526] transaction","detail":"{read_only:false; response_revision:210; number_of_response:1; }","duration":"260.007025ms","start":"2025-11-22T00:19:32.373672Z","end":"2025-11-22T00:19:32.633679Z","steps":["trace[64928526] 'process raft request'  (duration: 259.898405ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:19:33.081079Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"335.74286ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790177431359743 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/secrets/kube-system/bootstrap-token-vumgow\" mod_revision:0 > success:<request_put:<key:\"/registry/secrets/kube-system/bootstrap-token-vumgow\" value_size:617 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-22T00:19:33.081182Z","caller":"traceutil/trace.go:171","msg":"trace[454440905] transaction","detail":"{read_only:false; response_revision:211; number_of_response:1; }","duration":"441.168552ms","start":"2025-11-22T00:19:32.639997Z","end":"2025-11-22T00:19:33.081166Z","steps":["trace[454440905] 'process raft request'  (duration: 104.950033ms)","trace[454440905] 'compare'  (duration: 335.635432ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-22T00:19:33.081293Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-11-22T00:19:32.63998Z","time spent":"441.252908ms","remote":"127.0.0.1:42828","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/secrets/kube-system/bootstrap-token-vumgow\" mod_revision:0 > success:<request_put:<key:\"/registry/secrets/kube-system/bootstrap-token-vumgow\" value_size:617 >> failure:<>"}
	{"level":"warn","ts":"2025-11-22T00:19:44.266771Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.299403ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-22T00:19:44.266864Z","caller":"traceutil/trace.go:171","msg":"trace[842289003] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/bootstrap-signer; range_end:; response_count:0; response_revision:282; }","duration":"130.453771ms","start":"2025-11-22T00:19:44.136394Z","end":"2025-11-22T00:19:44.266847Z","steps":["trace[842289003] 'range keys from in-memory index tree'  (duration: 130.216573ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:19:44.386458Z","caller":"traceutil/trace.go:171","msg":"trace[490276607] linearizableReadLoop","detail":"{readStateIndex:296; appliedIndex:295; }","duration":"101.94453ms","start":"2025-11-22T00:19:44.284493Z","end":"2025-11-22T00:19:44.386437Z","steps":["trace[490276607] 'read index received'  (duration: 101.776407ms)","trace[490276607] 'applied index is now lower than readState.Index'  (duration: 167.67µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:19:44.386547Z","caller":"traceutil/trace.go:171","msg":"trace[1514742623] transaction","detail":"{read_only:false; response_revision:283; number_of_response:1; }","duration":"114.786396ms","start":"2025-11-22T00:19:44.271741Z","end":"2025-11-22T00:19:44.386527Z","steps":["trace[1514742623] 'process raft request'  (duration: 114.589176ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:19:44.386605Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.121151ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-22T00:19:44.386631Z","caller":"traceutil/trace.go:171","msg":"trace[800592602] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:283; }","duration":"102.163591ms","start":"2025-11-22T00:19:44.284459Z","end":"2025-11-22T00:19:44.386622Z","steps":["trace[800592602] 'agreement among raft nodes before linearized reading'  (duration: 102.059746ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:20:06.401485Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.691938ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-22T00:20:06.401571Z","caller":"traceutil/trace.go:171","msg":"trace[919203119] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:425; }","duration":"116.801997ms","start":"2025-11-22T00:20:06.284749Z","end":"2025-11-22T00:20:06.401551Z","steps":["trace[919203119] 'range keys from in-memory index tree'  (duration: 116.607287ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:20:17 up  1:02,  0 user,  load average: 6.48, 3.76, 2.29
	Linux old-k8s-version-462319 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [abad042f2a4adf0bb5a1e42eb6090d0433dbd093e2502e0a0763cd88008fa485] <==
	I1122 00:19:50.358053       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:19:50.379516       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1122 00:19:50.379673       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:19:50.379699       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:19:50.379728       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:19:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:19:50.657926       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:19:50.657947       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:19:50.657972       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:19:50.658082       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:19:50.980378       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:19:50.980413       1 metrics.go:72] Registering metrics
	I1122 00:19:50.980477       1 controller.go:711] "Syncing nftables rules"
	I1122 00:20:00.663360       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1122 00:20:00.663424       1 main.go:301] handling current node
	I1122 00:20:10.657535       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1122 00:20:10.657598       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e398c42ad8188a2a96d101f089a0968d374f75b6827a154f004bd956b9155274] <==
	I1122 00:19:29.739253       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1122 00:19:29.739494       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1122 00:19:29.739756       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1122 00:19:29.739791       1 aggregator.go:166] initial CRD sync complete...
	I1122 00:19:29.739800       1 autoregister_controller.go:141] Starting autoregister controller
	I1122 00:19:29.739807       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1122 00:19:29.739814       1 cache.go:39] Caches are synced for autoregister controller
	I1122 00:19:29.740221       1 controller.go:624] quota admission added evaluator for: namespaces
	I1122 00:19:29.740304       1 shared_informer.go:318] Caches are synced for configmaps
	I1122 00:19:29.936021       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:19:30.645531       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1122 00:19:30.649522       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1122 00:19:30.649546       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:19:31.151928       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:19:31.192786       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:19:31.249628       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1122 00:19:31.255812       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1122 00:19:31.257056       1 controller.go:624] quota admission added evaluator for: endpoints
	I1122 00:19:31.261743       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:19:31.700612       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1122 00:19:33.349558       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1122 00:19:33.363593       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1122 00:19:33.376299       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1122 00:19:46.344730       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1122 00:19:46.397570       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [355ecffe75a3ff0874bfe775cd91a06b9bfff9f2dc65c709c3da1adca76e11c1] <==
	I1122 00:19:45.646325       1 shared_informer.go:318] Caches are synced for resource quota
	I1122 00:19:45.687399       1 shared_informer.go:318] Caches are synced for disruption
	I1122 00:19:45.693911       1 shared_informer.go:318] Caches are synced for resource quota
	I1122 00:19:46.009572       1 shared_informer.go:318] Caches are synced for garbage collector
	I1122 00:19:46.084787       1 shared_informer.go:318] Caches are synced for garbage collector
	I1122 00:19:46.084820       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1122 00:19:46.355549       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-kqrng"
	I1122 00:19:46.357410       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-ldtd8"
	I1122 00:19:46.402945       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1122 00:19:46.497513       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-pqbfp"
	I1122 00:19:46.505494       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-bjgv6"
	I1122 00:19:46.515365       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="112.69029ms"
	I1122 00:19:46.537252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.812757ms"
	I1122 00:19:46.537541       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="177.843µs"
	I1122 00:19:47.048823       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1122 00:19:47.070179       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-bjgv6"
	I1122 00:19:47.078565       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="31.08623ms"
	I1122 00:19:47.085902       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.261706ms"
	I1122 00:19:47.086048       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.581µs"
	I1122 00:20:00.892386       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="138.286µs"
	I1122 00:20:00.912888       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.033µs"
	I1122 00:20:01.551233       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="124.993µs"
	I1122 00:20:02.562092       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.330757ms"
	I1122 00:20:02.562207       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="69.9µs"
	I1122 00:20:05.541105       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [5119ee9a69fb309c6fe6c40bfdf7853c1d5fd0390280d45b28a695bd3259a0c0] <==
	I1122 00:19:47.043350       1 server_others.go:69] "Using iptables proxy"
	I1122 00:19:47.061630       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1122 00:19:47.101193       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:19:47.103704       1 server_others.go:152] "Using iptables Proxier"
	I1122 00:19:47.103745       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1122 00:19:47.103755       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1122 00:19:47.103806       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1122 00:19:47.104104       1 server.go:846] "Version info" version="v1.28.0"
	I1122 00:19:47.104124       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:19:47.104828       1 config.go:188] "Starting service config controller"
	I1122 00:19:47.104867       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1122 00:19:47.104926       1 config.go:97] "Starting endpoint slice config controller"
	I1122 00:19:47.104932       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1122 00:19:47.105174       1 config.go:315] "Starting node config controller"
	I1122 00:19:47.105210       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1122 00:19:47.205514       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1122 00:19:47.205516       1 shared_informer.go:318] Caches are synced for service config
	I1122 00:19:47.205561       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [1863b35aae093f7c8f897de1e1301f7582ed68975578bf5d2f19a845b5bbb715] <==
	W1122 00:19:29.717451       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1122 00:19:29.717478       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1122 00:19:29.717458       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1122 00:19:29.717515       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1122 00:19:29.717553       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1122 00:19:29.717616       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1122 00:19:29.717652       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1122 00:19:29.717675       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1122 00:19:30.562109       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1122 00:19:30.562139       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1122 00:19:30.586044       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1122 00:19:30.586087       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1122 00:19:30.770112       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1122 00:19:30.770162       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1122 00:19:30.772555       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1122 00:19:30.772599       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1122 00:19:30.781374       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1122 00:19:30.781431       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1122 00:19:30.807504       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1122 00:19:30.807533       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1122 00:19:30.845180       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1122 00:19:30.845236       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1122 00:19:30.871051       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1122 00:19:30.871090       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1122 00:19:33.910375       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 22 00:19:45 old-k8s-version-462319 kubelet[1521]: I1122 00:19:45.613796    1521 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 22 00:19:46 old-k8s-version-462319 kubelet[1521]: I1122 00:19:46.364926    1521 topology_manager.go:215] "Topology Admit Handler" podUID="643cd348-4af3-4720-af0d-e931f184742c" podNamespace="kube-system" podName="kube-proxy-kqrng"
	Nov 22 00:19:46 old-k8s-version-462319 kubelet[1521]: I1122 00:19:46.365817    1521 topology_manager.go:215] "Topology Admit Handler" podUID="6bf161d2-c442-466d-98b8-c313a127bf22" podNamespace="kube-system" podName="kindnet-ldtd8"
	Nov 22 00:19:46 old-k8s-version-462319 kubelet[1521]: I1122 00:19:46.396776    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-295rj\" (UniqueName: \"kubernetes.io/projected/643cd348-4af3-4720-af0d-e931f184742c-kube-api-access-295rj\") pod \"kube-proxy-kqrng\" (UID: \"643cd348-4af3-4720-af0d-e931f184742c\") " pod="kube-system/kube-proxy-kqrng"
	Nov 22 00:19:46 old-k8s-version-462319 kubelet[1521]: I1122 00:19:46.398874    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/643cd348-4af3-4720-af0d-e931f184742c-lib-modules\") pod \"kube-proxy-kqrng\" (UID: \"643cd348-4af3-4720-af0d-e931f184742c\") " pod="kube-system/kube-proxy-kqrng"
	Nov 22 00:19:46 old-k8s-version-462319 kubelet[1521]: I1122 00:19:46.398955    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6bf161d2-c442-466d-98b8-c313a127bf22-cni-cfg\") pod \"kindnet-ldtd8\" (UID: \"6bf161d2-c442-466d-98b8-c313a127bf22\") " pod="kube-system/kindnet-ldtd8"
	Nov 22 00:19:46 old-k8s-version-462319 kubelet[1521]: I1122 00:19:46.398980    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6bf161d2-c442-466d-98b8-c313a127bf22-xtables-lock\") pod \"kindnet-ldtd8\" (UID: \"6bf161d2-c442-466d-98b8-c313a127bf22\") " pod="kube-system/kindnet-ldtd8"
	Nov 22 00:19:46 old-k8s-version-462319 kubelet[1521]: I1122 00:19:46.399025    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6bf161d2-c442-466d-98b8-c313a127bf22-lib-modules\") pod \"kindnet-ldtd8\" (UID: \"6bf161d2-c442-466d-98b8-c313a127bf22\") " pod="kube-system/kindnet-ldtd8"
	Nov 22 00:19:46 old-k8s-version-462319 kubelet[1521]: I1122 00:19:46.399054    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/643cd348-4af3-4720-af0d-e931f184742c-kube-proxy\") pod \"kube-proxy-kqrng\" (UID: \"643cd348-4af3-4720-af0d-e931f184742c\") " pod="kube-system/kube-proxy-kqrng"
	Nov 22 00:19:46 old-k8s-version-462319 kubelet[1521]: I1122 00:19:46.399082    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/643cd348-4af3-4720-af0d-e931f184742c-xtables-lock\") pod \"kube-proxy-kqrng\" (UID: \"643cd348-4af3-4720-af0d-e931f184742c\") " pod="kube-system/kube-proxy-kqrng"
	Nov 22 00:19:46 old-k8s-version-462319 kubelet[1521]: I1122 00:19:46.399117    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwtxn\" (UniqueName: \"kubernetes.io/projected/6bf161d2-c442-466d-98b8-c313a127bf22-kube-api-access-xwtxn\") pod \"kindnet-ldtd8\" (UID: \"6bf161d2-c442-466d-98b8-c313a127bf22\") " pod="kube-system/kindnet-ldtd8"
	Nov 22 00:19:47 old-k8s-version-462319 kubelet[1521]: I1122 00:19:47.509109    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-kqrng" podStartSLOduration=1.509057216 podCreationTimestamp="2025-11-22 00:19:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:19:47.50894671 +0000 UTC m=+14.188238544" watchObservedRunningTime="2025-11-22 00:19:47.509057216 +0000 UTC m=+14.188349048"
	Nov 22 00:19:50 old-k8s-version-462319 kubelet[1521]: I1122 00:19:50.516088    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-ldtd8" podStartSLOduration=1.666002271 podCreationTimestamp="2025-11-22 00:19:46 +0000 UTC" firstStartedPulling="2025-11-22 00:19:47.157978554 +0000 UTC m=+13.837270379" lastFinishedPulling="2025-11-22 00:19:50.007957975 +0000 UTC m=+16.687249802" observedRunningTime="2025-11-22 00:19:50.515675934 +0000 UTC m=+17.194967778" watchObservedRunningTime="2025-11-22 00:19:50.515981694 +0000 UTC m=+17.195273528"
	Nov 22 00:20:00 old-k8s-version-462319 kubelet[1521]: I1122 00:20:00.709466    1521 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 22 00:20:00 old-k8s-version-462319 kubelet[1521]: I1122 00:20:00.889924    1521 topology_manager.go:215] "Topology Admit Handler" podUID="fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2" podNamespace="kube-system" podName="storage-provisioner"
	Nov 22 00:20:00 old-k8s-version-462319 kubelet[1521]: I1122 00:20:00.892871    1521 topology_manager.go:215] "Topology Admit Handler" podUID="44750e8d-5eeb-4845-9029-a58cbf976b62" podNamespace="kube-system" podName="coredns-5dd5756b68-pqbfp"
	Nov 22 00:20:00 old-k8s-version-462319 kubelet[1521]: I1122 00:20:00.993531    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44750e8d-5eeb-4845-9029-a58cbf976b62-config-volume\") pod \"coredns-5dd5756b68-pqbfp\" (UID: \"44750e8d-5eeb-4845-9029-a58cbf976b62\") " pod="kube-system/coredns-5dd5756b68-pqbfp"
	Nov 22 00:20:00 old-k8s-version-462319 kubelet[1521]: I1122 00:20:00.993597    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2-tmp\") pod \"storage-provisioner\" (UID: \"fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2\") " pod="kube-system/storage-provisioner"
	Nov 22 00:20:00 old-k8s-version-462319 kubelet[1521]: I1122 00:20:00.993637    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfnhk\" (UniqueName: \"kubernetes.io/projected/44750e8d-5eeb-4845-9029-a58cbf976b62-kube-api-access-pfnhk\") pod \"coredns-5dd5756b68-pqbfp\" (UID: \"44750e8d-5eeb-4845-9029-a58cbf976b62\") " pod="kube-system/coredns-5dd5756b68-pqbfp"
	Nov 22 00:20:00 old-k8s-version-462319 kubelet[1521]: I1122 00:20:00.993669    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj2fz\" (UniqueName: \"kubernetes.io/projected/fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2-kube-api-access-rj2fz\") pod \"storage-provisioner\" (UID: \"fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2\") " pod="kube-system/storage-provisioner"
	Nov 22 00:20:01 old-k8s-version-462319 kubelet[1521]: I1122 00:20:01.564512    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.564413938 podCreationTimestamp="2025-11-22 00:19:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:20:01.564333027 +0000 UTC m=+28.243624860" watchObservedRunningTime="2025-11-22 00:20:01.564413938 +0000 UTC m=+28.243705771"
	Nov 22 00:20:01 old-k8s-version-462319 kubelet[1521]: I1122 00:20:01.564659    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-pqbfp" podStartSLOduration=15.564629833 podCreationTimestamp="2025-11-22 00:19:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:20:01.551555332 +0000 UTC m=+28.230847165" watchObservedRunningTime="2025-11-22 00:20:01.564629833 +0000 UTC m=+28.243921660"
	Nov 22 00:20:04 old-k8s-version-462319 kubelet[1521]: I1122 00:20:04.775067    1521 topology_manager.go:215] "Topology Admit Handler" podUID="89dd9411-148d-4a8e-98d3-a51a8eab9d35" podNamespace="default" podName="busybox"
	Nov 22 00:20:04 old-k8s-version-462319 kubelet[1521]: I1122 00:20:04.915405    1521 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7gkx\" (UniqueName: \"kubernetes.io/projected/89dd9411-148d-4a8e-98d3-a51a8eab9d35-kube-api-access-l7gkx\") pod \"busybox\" (UID: \"89dd9411-148d-4a8e-98d3-a51a8eab9d35\") " pod="default/busybox"
	Nov 22 00:20:08 old-k8s-version-462319 kubelet[1521]: I1122 00:20:08.563800    1521 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.326082204 podCreationTimestamp="2025-11-22 00:20:04 +0000 UTC" firstStartedPulling="2025-11-22 00:20:06.067901148 +0000 UTC m=+32.747192973" lastFinishedPulling="2025-11-22 00:20:08.305570732 +0000 UTC m=+34.984862556" observedRunningTime="2025-11-22 00:20:08.563606355 +0000 UTC m=+35.242898188" watchObservedRunningTime="2025-11-22 00:20:08.563751787 +0000 UTC m=+35.243043620"
	
	
	==> storage-provisioner [f2a1ec178c227617bd32e678c94e3d44e606683f0b10ccdbc182dec6d6d5c9e9] <==
	I1122 00:20:01.401220       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:20:01.412796       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:20:01.412842       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1122 00:20:01.421489       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:20:01.421683       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-462319_fbf5718a-3981-4828-8660-7b6ddab898c0!
	I1122 00:20:01.421619       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d8be93cf-82a7-4f20-a2ea-927b67416b8f", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-462319_fbf5718a-3981-4828-8660-7b6ddab898c0 became leader
	I1122 00:20:01.522750       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-462319_fbf5718a-3981-4828-8660-7b6ddab898c0!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-462319 -n old-k8s-version-462319
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-462319 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (13.36s)
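Editor's note: to iterate on this failure locally, the failing subtest can be selected with Go's -run filter, since the slash-separated name above is a standard subtest path. This is a sketch only, assuming the suite is invoked through the ordinary go test harness; the test/integration package path, the timeout, and any harness-specific flags (driver, container runtime, path to the minikube binary) are assumptions and are not taken from this report.

	# Hypothetical local re-run of only the failing subtest (package path and timeout are assumptions)
	go test ./test/integration/ -v -timeout 30m \
	  -run 'TestStartStop/group/old-k8s-version/serial/DeployApp'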

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (13.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-781232 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c9470f46-fa0e-479c-82bc-857ad36201bf] Pending
helpers_test.go:352: "busybox" [c9470f46-fa0e-479c-82bc-857ad36201bf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c9470f46-fa0e-479c-82bc-857ad36201bf] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003053215s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-781232 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
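Editor's note: the deploy step above can be reproduced by hand against the same profile. The manifest below is only a minimal sketch of a pod carrying the integration-test=busybox label that the test waits on; the real contents of testdata/busybox.yaml are not shown in this report and may differ (the image tag and sleep command here are assumptions). The context name and the exec command are taken verbatim from the steps above.

	# Hypothetical minimal equivalent of testdata/busybox.yaml, applied to the same context
	cat <<'EOF' | kubectl --context no-preload-781232 create -f -
	apiVersion: v1
	kind: Pod
	metadata:
	  name: busybox
	  labels:
	    integration-test: busybox
	spec:
	  containers:
	  - name: busybox
	    image: busybox:1.36
	    command: ["sleep", "3600"]
	EOF
	# Wait for the pod, then check the container's open-file limit
	# (this run returned 1024; the test expects 1048576)
	kubectl --context no-preload-781232 wait --for=condition=Ready pod/busybox --timeout=120s
	kubectl --context no-preload-781232 exec busybox -- /bin/sh -c "ulimit -n"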
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-781232
helpers_test.go:243: (dbg) docker inspect no-preload-781232:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e6866ff20d68f8b2ae22665293c4e6a886fd27072201fb8e2c70d38fec0d6801",
	        "Created": "2025-11-22T00:19:23.714697998Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 251859,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:19:23.763938006Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e5906b22e872a17998ae88aee6d850484e7a99144e0db6afcf2c44a53e6042d4",
	        "ResolvConfPath": "/var/lib/docker/containers/e6866ff20d68f8b2ae22665293c4e6a886fd27072201fb8e2c70d38fec0d6801/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6866ff20d68f8b2ae22665293c4e6a886fd27072201fb8e2c70d38fec0d6801/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6866ff20d68f8b2ae22665293c4e6a886fd27072201fb8e2c70d38fec0d6801/hosts",
	        "LogPath": "/var/lib/docker/containers/e6866ff20d68f8b2ae22665293c4e6a886fd27072201fb8e2c70d38fec0d6801/e6866ff20d68f8b2ae22665293c4e6a886fd27072201fb8e2c70d38fec0d6801-json.log",
	        "Name": "/no-preload-781232",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-781232:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-781232",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e6866ff20d68f8b2ae22665293c4e6a886fd27072201fb8e2c70d38fec0d6801",
	                "LowerDir": "/var/lib/docker/overlay2/ffb802e3234af36569acaf9598f08ce82b2457943278e51bdea70ae4987b4b7e-init/diff:/var/lib/docker/overlay2/4b4af9a4e857911a6b5096aeeaee227ee7577c6eff3b08bbb4e765c49ed2fb70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ffb802e3234af36569acaf9598f08ce82b2457943278e51bdea70ae4987b4b7e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ffb802e3234af36569acaf9598f08ce82b2457943278e51bdea70ae4987b4b7e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ffb802e3234af36569acaf9598f08ce82b2457943278e51bdea70ae4987b4b7e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-781232",
	                "Source": "/var/lib/docker/volumes/no-preload-781232/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-781232",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-781232",
	                "name.minikube.sigs.k8s.io": "no-preload-781232",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6cc2b0abcc4b89d4773e5b4ef90cec1849441e89ee9f2f96b3f073bacf5664b0",
	            "SandboxKey": "/var/run/docker/netns/6cc2b0abcc4b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-781232": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cca4389e0847032e4c18e18b7945e1c2646a84dee2b87d0f44df9d94c78a3170",
	                    "EndpointID": "5eb2eb6bb07c716470bc95040f0f020393f43f81701c3f040328b16c8328525a",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "f6:28:1a:7f:f0:68",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-781232",
	                        "e6866ff20d68"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-781232 -n no-preload-781232
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-781232 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-781232 logs -n 25: (1.120196499s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cilium-687868 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo containerd config dump                                                                                                                                                                                                        │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo crio config                                                                                                                                                                                                                   │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ delete  │ -p cilium-687868                                                                                                                                                                                                                                    │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ start   │ -p old-k8s-version-462319 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-462319 │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:20 UTC │
	│ ssh     │ -p NoKubernetes-714059 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ start   │ -p cert-expiration-427330 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-427330 │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ delete  │ -p cert-expiration-427330                                                                                                                                                                                                                           │ cert-expiration-427330 │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ start   │ -p no-preload-781232 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-781232      │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:20 UTC │
	│ stop    │ -p NoKubernetes-714059                                                                                                                                                                                                                              │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ start   │ -p NoKubernetes-714059 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ ssh     │ -p NoKubernetes-714059 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ delete  │ -p NoKubernetes-714059                                                                                                                                                                                                                              │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:20 UTC │
	│ start   │ -p embed-certs-491677 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-491677     │ jenkins │ v1.37.0 │ 22 Nov 25 00:20 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-462319 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-462319 │ jenkins │ v1.37.0 │ 22 Nov 25 00:20 UTC │ 22 Nov 25 00:20 UTC │
	│ stop    │ -p old-k8s-version-462319 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-462319 │ jenkins │ v1.37.0 │ 22 Nov 25 00:20 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:20:01
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:20:01.497017  260527 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:20:01.497324  260527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:20:01.497336  260527 out.go:374] Setting ErrFile to fd 2...
	I1122 00:20:01.497340  260527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:20:01.497588  260527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9059/.minikube/bin
	I1122 00:20:01.498054  260527 out.go:368] Setting JSON to false
	I1122 00:20:01.499443  260527 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3740,"bootTime":1763767061,"procs":385,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:20:01.499503  260527 start.go:143] virtualization: kvm guest
	I1122 00:20:01.501458  260527 out.go:179] * [embed-certs-491677] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:20:01.503562  260527 notify.go:221] Checking for updates...
	I1122 00:20:01.503572  260527 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:20:01.505088  260527 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:20:01.506758  260527 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9059/kubeconfig
	I1122 00:20:01.508287  260527 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9059/.minikube
	I1122 00:20:01.509699  260527 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:20:01.511183  260527 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:20:01.513382  260527 config.go:182] Loaded profile config "kubernetes-upgrade-882262": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:20:01.513541  260527 config.go:182] Loaded profile config "no-preload-781232": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:20:01.513638  260527 config.go:182] Loaded profile config "old-k8s-version-462319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1122 00:20:01.513752  260527 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:20:01.545401  260527 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1122 00:20:01.545504  260527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:20:01.611105  260527 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-22 00:20:01.601298329 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:20:01.611234  260527 docker.go:319] overlay module found
	I1122 00:20:01.613226  260527 out.go:179] * Using the docker driver based on user configuration
	I1122 00:20:01.614649  260527 start.go:309] selected driver: docker
	I1122 00:20:01.614666  260527 start.go:930] validating driver "docker" against <nil>
	I1122 00:20:01.614677  260527 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:20:01.615350  260527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:20:01.674666  260527 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-22 00:20:01.664354692 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:20:01.674876  260527 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1122 00:20:01.675176  260527 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:20:01.676975  260527 out.go:179] * Using Docker driver with root privileges
	I1122 00:20:01.678251  260527 cni.go:84] Creating CNI manager for ""
	I1122 00:20:01.678367  260527 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:20:01.678383  260527 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1122 00:20:01.678481  260527 start.go:353] cluster config:
	{Name:embed-certs-491677 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-491677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:20:01.680036  260527 out.go:179] * Starting "embed-certs-491677" primary control-plane node in "embed-certs-491677" cluster
	I1122 00:20:01.683810  260527 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1122 00:20:01.685242  260527 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:20:01.686680  260527 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:20:01.686729  260527 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1122 00:20:01.686743  260527 cache.go:65] Caching tarball of preloaded images
	I1122 00:20:01.686775  260527 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:20:01.686916  260527 preload.go:238] Found /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1122 00:20:01.686942  260527 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1122 00:20:01.687116  260527 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/config.json ...
	I1122 00:20:01.687148  260527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/config.json: {Name:mkf02d672882aad1c3b94e79745f8cf62e3f5b13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:20:01.708872  260527 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:20:01.708897  260527 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:20:01.708914  260527 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:20:01.708943  260527 start.go:360] acquireMachinesLock for embed-certs-491677: {Name:mkbe59d49caffedca862a9ecb177d8d82196efdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:20:01.709044  260527 start.go:364] duration metric: took 84.98µs to acquireMachinesLock for "embed-certs-491677"
	I1122 00:20:01.709067  260527 start.go:93] Provisioning new machine with config: &{Name:embed-certs-491677 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-491677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1122 00:20:01.709131  260527 start.go:125] createHost starting for "" (driver="docker")
	I1122 00:19:58.829298  218693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:19:58.829759  218693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:19:58.829815  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:19:58.829864  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:19:58.856999  218693 cri.go:89] found id: "031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:19:58.857027  218693 cri.go:89] found id: "2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:19:58.857033  218693 cri.go:89] found id: ""
	I1122 00:19:58.857044  218693 logs.go:282] 2 containers: [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587]
	I1122 00:19:58.857093  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:19:58.861107  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:19:58.865268  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1122 00:19:58.865337  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:19:58.892542  218693 cri.go:89] found id: "ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:19:58.892564  218693 cri.go:89] found id: ""
	I1122 00:19:58.892572  218693 logs.go:282] 1 containers: [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7]
	I1122 00:19:58.892626  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:19:58.896771  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1122 00:19:58.896846  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:19:58.925628  218693 cri.go:89] found id: ""
	I1122 00:19:58.925652  218693 logs.go:282] 0 containers: []
	W1122 00:19:58.925660  218693 logs.go:284] No container was found matching "coredns"
	I1122 00:19:58.925666  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:19:58.925724  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:19:58.955304  218693 cri.go:89] found id: "8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:19:58.955326  218693 cri.go:89] found id: "b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:19:58.955332  218693 cri.go:89] found id: ""
	I1122 00:19:58.955340  218693 logs.go:282] 2 containers: [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2]
	I1122 00:19:58.955397  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:19:58.959396  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:19:58.963562  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:19:58.963626  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:19:58.991860  218693 cri.go:89] found id: ""
	I1122 00:19:58.991883  218693 logs.go:282] 0 containers: []
	W1122 00:19:58.991890  218693 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:19:58.991895  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:19:58.991949  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:19:59.020457  218693 cri.go:89] found id: "91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:19:59.020483  218693 cri.go:89] found id: "13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:19:59.020489  218693 cri.go:89] found id: ""
	I1122 00:19:59.020502  218693 logs.go:282] 2 containers: [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216]
	I1122 00:19:59.020550  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:19:59.024967  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:19:59.031778  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1122 00:19:59.031854  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:19:59.061726  218693 cri.go:89] found id: ""
	I1122 00:19:59.061752  218693 logs.go:282] 0 containers: []
	W1122 00:19:59.061763  218693 logs.go:284] No container was found matching "kindnet"
	I1122 00:19:59.061771  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:19:59.061831  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:19:59.089141  218693 cri.go:89] found id: ""
	I1122 00:19:59.089164  218693 logs.go:282] 0 containers: []
	W1122 00:19:59.089174  218693 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:19:59.089185  218693 logs.go:123] Gathering logs for kubelet ...
	I1122 00:19:59.089198  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:19:59.186417  218693 logs.go:123] Gathering logs for dmesg ...
	I1122 00:19:59.186452  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:19:59.201060  218693 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:19:59.201095  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:19:59.264254  218693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:19:59.264297  218693 logs.go:123] Gathering logs for kube-apiserver [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d] ...
	I1122 00:19:59.264313  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:19:59.303605  218693 logs.go:123] Gathering logs for kube-scheduler [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78] ...
	I1122 00:19:59.303643  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:19:59.358382  218693 logs.go:123] Gathering logs for kube-controller-manager [13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216] ...
	I1122 00:19:59.358425  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:19:59.398629  218693 logs.go:123] Gathering logs for containerd ...
	I1122 00:19:59.398669  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1122 00:19:59.449463  218693 logs.go:123] Gathering logs for kube-apiserver [2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587] ...
	I1122 00:19:59.449505  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:19:59.487365  218693 logs.go:123] Gathering logs for etcd [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7] ...
	I1122 00:19:59.487403  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:19:59.526046  218693 logs.go:123] Gathering logs for kube-scheduler [b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2] ...
	I1122 00:19:59.526080  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:19:59.562812  218693 logs.go:123] Gathering logs for kube-controller-manager [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a] ...
	I1122 00:19:59.562843  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:19:59.594191  218693 logs.go:123] Gathering logs for container status ...
	I1122 00:19:59.594230  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:20:02.129372  218693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:20:02.129923  218693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:20:02.130004  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:20:02.130071  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:20:02.161455  218693 cri.go:89] found id: "031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:02.161484  218693 cri.go:89] found id: "2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:02.161490  218693 cri.go:89] found id: ""
	I1122 00:20:02.161501  218693 logs.go:282] 2 containers: [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587]
	I1122 00:20:02.161563  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:02.165824  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:02.170451  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1122 00:20:02.170522  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:19:58.029853  251199 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-781232" context rescaled to 1 replicas
	W1122 00:19:59.529847  251199 node_ready.go:57] node "no-preload-781232" has "Ready":"False" status (will retry)
	W1122 00:20:01.530493  251199 node_ready.go:57] node "no-preload-781232" has "Ready":"False" status (will retry)
	W1122 00:20:00.520224  247021 node_ready.go:57] node "old-k8s-version-462319" has "Ready":"False" status (will retry)
	I1122 00:20:01.019651  247021 node_ready.go:49] node "old-k8s-version-462319" is "Ready"
	I1122 00:20:01.019681  247021 node_ready.go:38] duration metric: took 14.003330086s for node "old-k8s-version-462319" to be "Ready" ...
	I1122 00:20:01.019696  247021 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:20:01.019743  247021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:20:01.032926  247021 api_server.go:72] duration metric: took 14.481952557s to wait for apiserver process to appear ...
	I1122 00:20:01.032954  247021 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:20:01.032973  247021 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:20:01.039899  247021 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1122 00:20:01.041146  247021 api_server.go:141] control plane version: v1.28.0
	I1122 00:20:01.041172  247021 api_server.go:131] duration metric: took 8.212119ms to wait for apiserver health ...
	I1122 00:20:01.041191  247021 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:20:01.044815  247021 system_pods.go:59] 8 kube-system pods found
	I1122 00:20:01.044853  247021 system_pods.go:61] "coredns-5dd5756b68-pqbfp" [44750e8d-5eeb-4845-9029-a58cbf976b62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:01.044862  247021 system_pods.go:61] "etcd-old-k8s-version-462319" [9580468b-aa0f-4d73-9c35-f9cc4c817cdd] Running
	I1122 00:20:01.044874  247021 system_pods.go:61] "kindnet-ldtd8" [6bf161d2-c442-466d-98b8-c313a127bf22] Running
	I1122 00:20:01.044879  247021 system_pods.go:61] "kube-apiserver-old-k8s-version-462319" [2f4b6fd0-2929-448d-820c-aabf2a9d4744] Running
	I1122 00:20:01.044888  247021 system_pods.go:61] "kube-controller-manager-old-k8s-version-462319" [83b4a291-8bac-4581-b4a6-80471e7228eb] Running
	I1122 00:20:01.044897  247021 system_pods.go:61] "kube-proxy-kqrng" [643cd348-4af3-4720-af0d-e931f184742c] Running
	I1122 00:20:01.044901  247021 system_pods.go:61] "kube-scheduler-old-k8s-version-462319" [c1dc982d-cc79-4df6-bdc4-7e47f5d5236c] Running
	I1122 00:20:01.044909  247021 system_pods.go:61] "storage-provisioner" [fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:01.044918  247021 system_pods.go:74] duration metric: took 3.718269ms to wait for pod list to return data ...
	I1122 00:20:01.044929  247021 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:20:01.047150  247021 default_sa.go:45] found service account: "default"
	I1122 00:20:01.047173  247021 default_sa.go:55] duration metric: took 2.236156ms for default service account to be created ...
	I1122 00:20:01.047182  247021 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:20:01.050474  247021 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:01.050506  247021 system_pods.go:89] "coredns-5dd5756b68-pqbfp" [44750e8d-5eeb-4845-9029-a58cbf976b62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:01.050514  247021 system_pods.go:89] "etcd-old-k8s-version-462319" [9580468b-aa0f-4d73-9c35-f9cc4c817cdd] Running
	I1122 00:20:01.050523  247021 system_pods.go:89] "kindnet-ldtd8" [6bf161d2-c442-466d-98b8-c313a127bf22] Running
	I1122 00:20:01.050528  247021 system_pods.go:89] "kube-apiserver-old-k8s-version-462319" [2f4b6fd0-2929-448d-820c-aabf2a9d4744] Running
	I1122 00:20:01.050533  247021 system_pods.go:89] "kube-controller-manager-old-k8s-version-462319" [83b4a291-8bac-4581-b4a6-80471e7228eb] Running
	I1122 00:20:01.050539  247021 system_pods.go:89] "kube-proxy-kqrng" [643cd348-4af3-4720-af0d-e931f184742c] Running
	I1122 00:20:01.050544  247021 system_pods.go:89] "kube-scheduler-old-k8s-version-462319" [c1dc982d-cc79-4df6-bdc4-7e47f5d5236c] Running
	I1122 00:20:01.050551  247021 system_pods.go:89] "storage-provisioner" [fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:01.050577  247021 retry.go:31] will retry after 205.575764ms: missing components: kube-dns
	I1122 00:20:01.261814  247021 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:01.261847  247021 system_pods.go:89] "coredns-5dd5756b68-pqbfp" [44750e8d-5eeb-4845-9029-a58cbf976b62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:01.261859  247021 system_pods.go:89] "etcd-old-k8s-version-462319" [9580468b-aa0f-4d73-9c35-f9cc4c817cdd] Running
	I1122 00:20:01.261865  247021 system_pods.go:89] "kindnet-ldtd8" [6bf161d2-c442-466d-98b8-c313a127bf22] Running
	I1122 00:20:01.261869  247021 system_pods.go:89] "kube-apiserver-old-k8s-version-462319" [2f4b6fd0-2929-448d-820c-aabf2a9d4744] Running
	I1122 00:20:01.261873  247021 system_pods.go:89] "kube-controller-manager-old-k8s-version-462319" [83b4a291-8bac-4581-b4a6-80471e7228eb] Running
	I1122 00:20:01.261877  247021 system_pods.go:89] "kube-proxy-kqrng" [643cd348-4af3-4720-af0d-e931f184742c] Running
	I1122 00:20:01.261879  247021 system_pods.go:89] "kube-scheduler-old-k8s-version-462319" [c1dc982d-cc79-4df6-bdc4-7e47f5d5236c] Running
	I1122 00:20:01.261884  247021 system_pods.go:89] "storage-provisioner" [fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:01.261900  247021 retry.go:31] will retry after 236.21482ms: missing components: kube-dns
	I1122 00:20:01.502877  247021 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:01.502913  247021 system_pods.go:89] "coredns-5dd5756b68-pqbfp" [44750e8d-5eeb-4845-9029-a58cbf976b62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:01.502921  247021 system_pods.go:89] "etcd-old-k8s-version-462319" [9580468b-aa0f-4d73-9c35-f9cc4c817cdd] Running
	I1122 00:20:01.502929  247021 system_pods.go:89] "kindnet-ldtd8" [6bf161d2-c442-466d-98b8-c313a127bf22] Running
	I1122 00:20:01.502935  247021 system_pods.go:89] "kube-apiserver-old-k8s-version-462319" [2f4b6fd0-2929-448d-820c-aabf2a9d4744] Running
	I1122 00:20:01.502952  247021 system_pods.go:89] "kube-controller-manager-old-k8s-version-462319" [83b4a291-8bac-4581-b4a6-80471e7228eb] Running
	I1122 00:20:01.502957  247021 system_pods.go:89] "kube-proxy-kqrng" [643cd348-4af3-4720-af0d-e931f184742c] Running
	I1122 00:20:01.502962  247021 system_pods.go:89] "kube-scheduler-old-k8s-version-462319" [c1dc982d-cc79-4df6-bdc4-7e47f5d5236c] Running
	I1122 00:20:01.502984  247021 system_pods.go:89] "storage-provisioner" [fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:01.503005  247021 retry.go:31] will retry after 442.873739ms: missing components: kube-dns
	I1122 00:20:01.950449  247021 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:01.950483  247021 system_pods.go:89] "coredns-5dd5756b68-pqbfp" [44750e8d-5eeb-4845-9029-a58cbf976b62] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:01.950492  247021 system_pods.go:89] "etcd-old-k8s-version-462319" [9580468b-aa0f-4d73-9c35-f9cc4c817cdd] Running
	I1122 00:20:01.950500  247021 system_pods.go:89] "kindnet-ldtd8" [6bf161d2-c442-466d-98b8-c313a127bf22] Running
	I1122 00:20:01.950505  247021 system_pods.go:89] "kube-apiserver-old-k8s-version-462319" [2f4b6fd0-2929-448d-820c-aabf2a9d4744] Running
	I1122 00:20:01.950516  247021 system_pods.go:89] "kube-controller-manager-old-k8s-version-462319" [83b4a291-8bac-4581-b4a6-80471e7228eb] Running
	I1122 00:20:01.950521  247021 system_pods.go:89] "kube-proxy-kqrng" [643cd348-4af3-4720-af0d-e931f184742c] Running
	I1122 00:20:01.950526  247021 system_pods.go:89] "kube-scheduler-old-k8s-version-462319" [c1dc982d-cc79-4df6-bdc4-7e47f5d5236c] Running
	I1122 00:20:01.950530  247021 system_pods.go:89] "storage-provisioner" [fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2] Running
	I1122 00:20:01.950541  247021 system_pods.go:126] duration metric: took 903.352039ms to wait for k8s-apps to be running ...
	I1122 00:20:01.950553  247021 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:20:01.950602  247021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:20:01.964580  247021 system_svc.go:56] duration metric: took 14.015441ms WaitForService to wait for kubelet
	I1122 00:20:01.964612  247021 kubeadm.go:587] duration metric: took 15.413644993s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:20:01.964634  247021 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:20:01.968157  247021 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1122 00:20:01.968185  247021 node_conditions.go:123] node cpu capacity is 8
	I1122 00:20:01.968205  247021 node_conditions.go:105] duration metric: took 3.565831ms to run NodePressure ...
	I1122 00:20:01.968227  247021 start.go:242] waiting for startup goroutines ...
	I1122 00:20:01.968237  247021 start.go:247] waiting for cluster config update ...
	I1122 00:20:01.968254  247021 start.go:256] writing updated cluster config ...
	I1122 00:20:01.968545  247021 ssh_runner.go:195] Run: rm -f paused
	I1122 00:20:01.972712  247021 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:20:01.976920  247021 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-pqbfp" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:02.983354  247021 pod_ready.go:94] pod "coredns-5dd5756b68-pqbfp" is "Ready"
	I1122 00:20:02.983385  247021 pod_ready.go:86] duration metric: took 1.00643947s for pod "coredns-5dd5756b68-pqbfp" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:02.987209  247021 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:02.992024  247021 pod_ready.go:94] pod "etcd-old-k8s-version-462319" is "Ready"
	I1122 00:20:02.992053  247021 pod_ready.go:86] duration metric: took 4.821819ms for pod "etcd-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:02.994875  247021 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:02.998765  247021 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-462319" is "Ready"
	I1122 00:20:02.998789  247021 pod_ready.go:86] duration metric: took 3.892836ms for pod "kube-apiserver-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:03.001798  247021 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:03.181579  247021 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-462319" is "Ready"
	I1122 00:20:03.181611  247021 pod_ready.go:86] duration metric: took 179.791243ms for pod "kube-controller-manager-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:03.381883  247021 pod_ready.go:83] waiting for pod "kube-proxy-kqrng" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:03.781562  247021 pod_ready.go:94] pod "kube-proxy-kqrng" is "Ready"
	I1122 00:20:03.781594  247021 pod_ready.go:86] duration metric: took 399.684082ms for pod "kube-proxy-kqrng" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:03.981736  247021 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:04.381559  247021 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-462319" is "Ready"
	I1122 00:20:04.381590  247021 pod_ready.go:86] duration metric: took 399.825883ms for pod "kube-scheduler-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:04.381604  247021 pod_ready.go:40] duration metric: took 2.408861294s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:20:04.431804  247021 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1122 00:20:04.435233  247021 out.go:203] 
	W1122 00:20:04.436473  247021 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1122 00:20:04.437863  247021 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1122 00:20:04.439555  247021 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-462319" cluster and "default" namespace by default
	I1122 00:20:01.711315  260527 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1122 00:20:01.711555  260527 start.go:159] libmachine.API.Create for "embed-certs-491677" (driver="docker")
	I1122 00:20:01.711610  260527 client.go:173] LocalClient.Create starting
	I1122 00:20:01.711685  260527 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem
	I1122 00:20:01.711719  260527 main.go:143] libmachine: Decoding PEM data...
	I1122 00:20:01.711737  260527 main.go:143] libmachine: Parsing certificate...
	I1122 00:20:01.711816  260527 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9059/.minikube/certs/cert.pem
	I1122 00:20:01.711837  260527 main.go:143] libmachine: Decoding PEM data...
	I1122 00:20:01.711846  260527 main.go:143] libmachine: Parsing certificate...
	I1122 00:20:01.712184  260527 cli_runner.go:164] Run: docker network inspect embed-certs-491677 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 00:20:01.730686  260527 cli_runner.go:211] docker network inspect embed-certs-491677 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 00:20:01.730752  260527 network_create.go:284] running [docker network inspect embed-certs-491677] to gather additional debugging logs...
	I1122 00:20:01.730771  260527 cli_runner.go:164] Run: docker network inspect embed-certs-491677
	W1122 00:20:01.749708  260527 cli_runner.go:211] docker network inspect embed-certs-491677 returned with exit code 1
	I1122 00:20:01.749739  260527 network_create.go:287] error running [docker network inspect embed-certs-491677]: docker network inspect embed-certs-491677: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-491677 not found
	I1122 00:20:01.749755  260527 network_create.go:289] output of [docker network inspect embed-certs-491677]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-491677 not found
	
	** /stderr **
	I1122 00:20:01.749902  260527 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:20:01.769006  260527 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1df6c22ede91 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:c7:f4:a5:24:54} reservation:<nil>}
	I1122 00:20:01.769731  260527 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-7d48551462a8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:3b:0e:74:ee:57} reservation:<nil>}
	I1122 00:20:01.770416  260527 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c50004b7f5b6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:73:1e:0d:b7:11} reservation:<nil>}
	I1122 00:20:01.771113  260527 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-166d2f324fb5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:da:99:1e:87:6f} reservation:<nil>}
	I1122 00:20:01.771891  260527 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ebca10}
	I1122 00:20:01.771919  260527 network_create.go:124] attempt to create docker network embed-certs-491677 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1122 00:20:01.771970  260527 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-491677 embed-certs-491677
	I1122 00:20:01.823460  260527 network_create.go:108] docker network embed-certs-491677 192.168.85.0/24 created
	I1122 00:20:01.823495  260527 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-491677" container
	I1122 00:20:01.823677  260527 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 00:20:01.843300  260527 cli_runner.go:164] Run: docker volume create embed-certs-491677 --label name.minikube.sigs.k8s.io=embed-certs-491677 --label created_by.minikube.sigs.k8s.io=true
	I1122 00:20:01.863723  260527 oci.go:103] Successfully created a docker volume embed-certs-491677
	I1122 00:20:01.863797  260527 cli_runner.go:164] Run: docker run --rm --name embed-certs-491677-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-491677 --entrypoint /usr/bin/test -v embed-certs-491677:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1122 00:20:02.270865  260527 oci.go:107] Successfully prepared a docker volume embed-certs-491677
	I1122 00:20:02.270965  260527 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:20:02.270986  260527 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:20:02.271058  260527 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-491677:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
	I1122 00:20:02.204729  218693 cri.go:89] found id: "ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:02.204756  218693 cri.go:89] found id: ""
	I1122 00:20:02.204766  218693 logs.go:282] 1 containers: [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7]
	I1122 00:20:02.204829  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:02.209535  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1122 00:20:02.209603  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:20:02.247383  218693 cri.go:89] found id: ""
	I1122 00:20:02.247408  218693 logs.go:282] 0 containers: []
	W1122 00:20:02.247416  218693 logs.go:284] No container was found matching "coredns"
	I1122 00:20:02.247422  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:20:02.247484  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:20:02.277440  218693 cri.go:89] found id: "8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:02.277466  218693 cri.go:89] found id: "b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:02.277473  218693 cri.go:89] found id: ""
	I1122 00:20:02.277483  218693 logs.go:282] 2 containers: [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2]
	I1122 00:20:02.277545  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:02.282049  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:02.286514  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:20:02.286581  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:20:02.316706  218693 cri.go:89] found id: ""
	I1122 00:20:02.316733  218693 logs.go:282] 0 containers: []
	W1122 00:20:02.316744  218693 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:20:02.316753  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:20:02.316813  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:20:02.347451  218693 cri.go:89] found id: "91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:02.347471  218693 cri.go:89] found id: "13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:02.347476  218693 cri.go:89] found id: ""
	I1122 00:20:02.347486  218693 logs.go:282] 2 containers: [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216]
	I1122 00:20:02.347542  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:02.352378  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:02.356502  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1122 00:20:02.356561  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:20:02.384778  218693 cri.go:89] found id: ""
	I1122 00:20:02.384802  218693 logs.go:282] 0 containers: []
	W1122 00:20:02.384814  218693 logs.go:284] No container was found matching "kindnet"
	I1122 00:20:02.384825  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:20:02.384887  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:20:02.421102  218693 cri.go:89] found id: ""
	I1122 00:20:02.421131  218693 logs.go:282] 0 containers: []
	W1122 00:20:02.421143  218693 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:20:02.421156  218693 logs.go:123] Gathering logs for containerd ...
	I1122 00:20:02.421171  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1122 00:20:02.477880  218693 logs.go:123] Gathering logs for kubelet ...
	I1122 00:20:02.477924  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:20:02.574856  218693 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:20:02.574892  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:20:02.641120  218693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:20:02.641142  218693 logs.go:123] Gathering logs for kube-apiserver [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d] ...
	I1122 00:20:02.641154  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:02.681648  218693 logs.go:123] Gathering logs for kube-scheduler [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78] ...
	I1122 00:20:02.681686  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:02.739093  218693 logs.go:123] Gathering logs for kube-scheduler [b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2] ...
	I1122 00:20:02.739128  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:02.774358  218693 logs.go:123] Gathering logs for kube-controller-manager [13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216] ...
	I1122 00:20:02.774395  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:02.810272  218693 logs.go:123] Gathering logs for container status ...
	I1122 00:20:02.810310  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:20:02.842900  218693 logs.go:123] Gathering logs for dmesg ...
	I1122 00:20:02.842942  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:20:02.857743  218693 logs.go:123] Gathering logs for kube-apiserver [2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587] ...
	I1122 00:20:02.857784  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:02.894229  218693 logs.go:123] Gathering logs for etcd [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7] ...
	I1122 00:20:02.894272  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:02.929523  218693 logs.go:123] Gathering logs for kube-controller-manager [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a] ...
	I1122 00:20:02.929555  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:05.459958  218693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:20:05.460532  218693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:20:05.460597  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:20:05.460676  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:20:05.488636  218693 cri.go:89] found id: "031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:05.488658  218693 cri.go:89] found id: "2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:05.488662  218693 cri.go:89] found id: ""
	I1122 00:20:05.488670  218693 logs.go:282] 2 containers: [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587]
	I1122 00:20:05.488715  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:05.492971  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:05.496804  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1122 00:20:05.496876  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:20:05.524856  218693 cri.go:89] found id: "ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:05.524883  218693 cri.go:89] found id: ""
	I1122 00:20:05.524902  218693 logs.go:282] 1 containers: [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7]
	I1122 00:20:05.524962  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:05.529434  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1122 00:20:05.529521  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:20:05.557780  218693 cri.go:89] found id: ""
	I1122 00:20:05.557805  218693 logs.go:282] 0 containers: []
	W1122 00:20:05.557819  218693 logs.go:284] No container was found matching "coredns"
	I1122 00:20:05.557828  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:20:05.557885  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:20:05.586142  218693 cri.go:89] found id: "8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:05.586166  218693 cri.go:89] found id: "b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:05.586173  218693 cri.go:89] found id: ""
	I1122 00:20:05.586184  218693 logs.go:282] 2 containers: [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2]
	I1122 00:20:05.586248  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:05.590458  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:05.594671  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:20:05.594752  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:20:05.623542  218693 cri.go:89] found id: ""
	I1122 00:20:05.623565  218693 logs.go:282] 0 containers: []
	W1122 00:20:05.623575  218693 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:20:05.623585  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:20:05.623653  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:20:05.651642  218693 cri.go:89] found id: "91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:05.651663  218693 cri.go:89] found id: "13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:05.651666  218693 cri.go:89] found id: ""
	I1122 00:20:05.651674  218693 logs.go:282] 2 containers: [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216]
	I1122 00:20:05.651724  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:05.655785  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:05.659668  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1122 00:20:05.659743  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:20:05.687725  218693 cri.go:89] found id: ""
	I1122 00:20:05.687748  218693 logs.go:282] 0 containers: []
	W1122 00:20:05.687756  218693 logs.go:284] No container was found matching "kindnet"
	I1122 00:20:05.687762  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:20:05.687810  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:20:05.714403  218693 cri.go:89] found id: ""
	I1122 00:20:05.714432  218693 logs.go:282] 0 containers: []
	W1122 00:20:05.714444  218693 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:20:05.714457  218693 logs.go:123] Gathering logs for kube-apiserver [2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587] ...
	I1122 00:20:05.714472  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:05.748851  218693 logs.go:123] Gathering logs for kube-scheduler [b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2] ...
	I1122 00:20:05.748901  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:05.784862  218693 logs.go:123] Gathering logs for kube-controller-manager [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a] ...
	I1122 00:20:05.784899  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:05.813532  218693 logs.go:123] Gathering logs for kube-controller-manager [13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216] ...
	I1122 00:20:05.813569  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:05.844930  218693 logs.go:123] Gathering logs for containerd ...
	I1122 00:20:05.844965  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1122 00:20:05.897273  218693 logs.go:123] Gathering logs for container status ...
	I1122 00:20:05.897337  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:20:05.935381  218693 logs.go:123] Gathering logs for kubelet ...
	I1122 00:20:05.935417  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:20:06.025566  218693 logs.go:123] Gathering logs for dmesg ...
	I1122 00:20:06.025612  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:20:06.040810  218693 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:20:06.040843  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:20:06.102006  218693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:20:06.102032  218693 logs.go:123] Gathering logs for etcd [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7] ...
	I1122 00:20:06.102050  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:06.136887  218693 logs.go:123] Gathering logs for kube-scheduler [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78] ...
	I1122 00:20:06.136937  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:06.192634  218693 logs.go:123] Gathering logs for kube-apiserver [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d] ...
	I1122 00:20:06.192674  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	W1122 00:20:04.029159  251199 node_ready.go:57] node "no-preload-781232" has "Ready":"False" status (will retry)
	W1122 00:20:06.067087  251199 node_ready.go:57] node "no-preload-781232" has "Ready":"False" status (will retry)
	I1122 00:20:06.722373  260527 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-491677:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.451238931s)
	I1122 00:20:06.722412  260527 kic.go:203] duration metric: took 4.451422839s to extract preloaded images to volume ...
	W1122 00:20:06.722533  260527 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1122 00:20:06.722570  260527 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1122 00:20:06.722615  260527 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:20:06.782296  260527 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-491677 --name embed-certs-491677 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-491677 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-491677 --network embed-certs-491677 --ip 192.168.85.2 --volume embed-certs-491677:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1122 00:20:07.109552  260527 cli_runner.go:164] Run: docker container inspect embed-certs-491677 --format={{.State.Running}}
	I1122 00:20:07.129178  260527 cli_runner.go:164] Run: docker container inspect embed-certs-491677 --format={{.State.Status}}
	I1122 00:20:07.148399  260527 cli_runner.go:164] Run: docker exec embed-certs-491677 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:20:07.196229  260527 oci.go:144] the created container "embed-certs-491677" has a running status.
	I1122 00:20:07.196362  260527 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa...
	I1122 00:20:07.257446  260527 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:20:07.289218  260527 cli_runner.go:164] Run: docker container inspect embed-certs-491677 --format={{.State.Status}}
	I1122 00:20:07.310559  260527 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:20:07.310578  260527 kic_runner.go:114] Args: [docker exec --privileged embed-certs-491677 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:20:07.351585  260527 cli_runner.go:164] Run: docker container inspect embed-certs-491677 --format={{.State.Status}}
	I1122 00:20:07.374469  260527 machine.go:94] provisionDockerMachine start ...
	I1122 00:20:07.374754  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:07.397641  260527 main.go:143] libmachine: Using SSH client type: native
	I1122 00:20:07.397885  260527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1122 00:20:07.397902  260527 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:20:07.398578  260527 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36770->127.0.0.1:33073: read: connection reset by peer
	I1122 00:20:10.523553  260527 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-491677
	
	I1122 00:20:10.523587  260527 ubuntu.go:182] provisioning hostname "embed-certs-491677"
	I1122 00:20:10.523652  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:10.544251  260527 main.go:143] libmachine: Using SSH client type: native
	I1122 00:20:10.544519  260527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1122 00:20:10.544536  260527 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-491677 && echo "embed-certs-491677" | sudo tee /etc/hostname
	I1122 00:20:10.679747  260527 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-491677
	
	I1122 00:20:10.679832  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:10.700586  260527 main.go:143] libmachine: Using SSH client type: native
	I1122 00:20:10.700833  260527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1122 00:20:10.700858  260527 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-491677' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-491677/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-491677' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:20:10.825289  260527 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:20:10.825326  260527 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9059/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9059/.minikube}
	I1122 00:20:10.825375  260527 ubuntu.go:190] setting up certificates
	I1122 00:20:10.825411  260527 provision.go:84] configureAuth start
	I1122 00:20:10.825489  260527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-491677
	I1122 00:20:10.844220  260527 provision.go:143] copyHostCerts
	I1122 00:20:10.844298  260527 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9059/.minikube/ca.pem, removing ...
	I1122 00:20:10.844307  260527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9059/.minikube/ca.pem
	I1122 00:20:10.844403  260527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9059/.minikube/ca.pem (1082 bytes)
	I1122 00:20:10.844496  260527 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9059/.minikube/cert.pem, removing ...
	I1122 00:20:10.844506  260527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9059/.minikube/cert.pem
	I1122 00:20:10.844532  260527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9059/.minikube/cert.pem (1123 bytes)
	I1122 00:20:10.844590  260527 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9059/.minikube/key.pem, removing ...
	I1122 00:20:10.844598  260527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9059/.minikube/key.pem
	I1122 00:20:10.844620  260527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9059/.minikube/key.pem (1679 bytes)
	I1122 00:20:10.844669  260527 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca-key.pem org=jenkins.embed-certs-491677 san=[127.0.0.1 192.168.85.2 embed-certs-491677 localhost minikube]
	I1122 00:20:10.881095  260527 provision.go:177] copyRemoteCerts
	I1122 00:20:10.881150  260527 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:20:10.881198  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:10.899974  260527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa Username:docker}
	I1122 00:20:10.993091  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:20:11.014763  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1122 00:20:11.034702  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1122 00:20:11.053678  260527 provision.go:87] duration metric: took 228.246896ms to configureAuth
	I1122 00:20:11.053708  260527 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:20:11.053892  260527 config.go:182] Loaded profile config "embed-certs-491677": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:20:11.053909  260527 machine.go:97] duration metric: took 3.67941396s to provisionDockerMachine
	I1122 00:20:11.053917  260527 client.go:176] duration metric: took 9.342299036s to LocalClient.Create
	I1122 00:20:11.053943  260527 start.go:167] duration metric: took 9.342388491s to libmachine.API.Create "embed-certs-491677"
	I1122 00:20:11.053956  260527 start.go:293] postStartSetup for "embed-certs-491677" (driver="docker")
	I1122 00:20:11.053984  260527 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:20:11.054052  260527 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:20:11.054103  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:11.073167  260527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa Username:docker}
	I1122 00:20:11.168158  260527 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:20:11.172076  260527 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:20:11.172422  260527 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:20:11.172459  260527 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9059/.minikube/addons for local assets ...
	I1122 00:20:11.172556  260527 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9059/.minikube/files for local assets ...
	I1122 00:20:11.172675  260527 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/ssl/certs/145302.pem -> 145302.pem in /etc/ssl/certs
	I1122 00:20:11.172811  260527 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:20:11.182207  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/ssl/certs/145302.pem --> /etc/ssl/certs/145302.pem (1708 bytes)
	I1122 00:20:11.203784  260527 start.go:296] duration metric: took 149.811059ms for postStartSetup
	I1122 00:20:11.204173  260527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-491677
	I1122 00:20:11.222954  260527 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/config.json ...
	I1122 00:20:11.223305  260527 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:20:11.223354  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:11.242018  260527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa Username:docker}
	I1122 00:20:11.333726  260527 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:20:11.338527  260527 start.go:128] duration metric: took 9.62936097s to createHost
	I1122 00:20:11.338558  260527 start.go:83] releasing machines lock for "embed-certs-491677", held for 9.629502399s
	I1122 00:20:11.338631  260527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-491677
	I1122 00:20:11.357563  260527 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:20:11.357634  260527 ssh_runner.go:195] Run: cat /version.json
	I1122 00:20:11.357684  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:11.357690  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:11.377098  260527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa Username:docker}
	I1122 00:20:11.378067  260527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa Username:docker}
	I1122 00:20:08.727161  218693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:20:08.727652  218693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:20:08.727710  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:20:08.727762  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:20:08.754498  218693 cri.go:89] found id: "031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:08.754522  218693 cri.go:89] found id: "2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:08.754527  218693 cri.go:89] found id: ""
	I1122 00:20:08.754535  218693 logs.go:282] 2 containers: [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587]
	I1122 00:20:08.754583  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:08.758867  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:08.762449  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1122 00:20:08.762501  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:20:08.788422  218693 cri.go:89] found id: "ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:08.788444  218693 cri.go:89] found id: ""
	I1122 00:20:08.788455  218693 logs.go:282] 1 containers: [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7]
	I1122 00:20:08.788512  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:08.792603  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1122 00:20:08.792668  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:20:08.820677  218693 cri.go:89] found id: ""
	I1122 00:20:08.820703  218693 logs.go:282] 0 containers: []
	W1122 00:20:08.820711  218693 logs.go:284] No container was found matching "coredns"
	I1122 00:20:08.820717  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:20:08.820769  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:20:08.848396  218693 cri.go:89] found id: "8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:08.848418  218693 cri.go:89] found id: "b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:08.848422  218693 cri.go:89] found id: ""
	I1122 00:20:08.848429  218693 logs.go:282] 2 containers: [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2]
	I1122 00:20:08.848485  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:08.852633  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:08.856393  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:20:08.856469  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:20:08.884423  218693 cri.go:89] found id: ""
	I1122 00:20:08.884454  218693 logs.go:282] 0 containers: []
	W1122 00:20:08.884467  218693 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:20:08.884476  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:20:08.884529  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:20:08.911898  218693 cri.go:89] found id: "91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:08.911917  218693 cri.go:89] found id: "13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:08.911921  218693 cri.go:89] found id: ""
	I1122 00:20:08.911928  218693 logs.go:282] 2 containers: [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216]
	I1122 00:20:08.912000  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:08.916097  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:08.919808  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1122 00:20:08.919868  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:20:08.945704  218693 cri.go:89] found id: ""
	I1122 00:20:08.945731  218693 logs.go:282] 0 containers: []
	W1122 00:20:08.945742  218693 logs.go:284] No container was found matching "kindnet"
	I1122 00:20:08.945750  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:20:08.945811  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:20:08.971599  218693 cri.go:89] found id: ""
	I1122 00:20:08.971630  218693 logs.go:282] 0 containers: []
	W1122 00:20:08.971642  218693 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:20:08.971658  218693 logs.go:123] Gathering logs for dmesg ...
	I1122 00:20:08.971686  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:20:08.985779  218693 logs.go:123] Gathering logs for etcd [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7] ...
	I1122 00:20:08.985806  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:09.018373  218693 logs.go:123] Gathering logs for kube-scheduler [b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2] ...
	I1122 00:20:09.018407  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:09.055328  218693 logs.go:123] Gathering logs for containerd ...
	I1122 00:20:09.055359  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1122 00:20:09.098567  218693 logs.go:123] Gathering logs for kubelet ...
	I1122 00:20:09.098608  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:20:09.183392  218693 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:20:09.183433  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:20:09.242636  218693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:20:09.242654  218693 logs.go:123] Gathering logs for kube-apiserver [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d] ...
	I1122 00:20:09.242666  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:09.276133  218693 logs.go:123] Gathering logs for kube-apiserver [2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587] ...
	I1122 00:20:09.276179  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:09.310731  218693 logs.go:123] Gathering logs for kube-scheduler [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78] ...
	I1122 00:20:09.310769  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:09.362187  218693 logs.go:123] Gathering logs for kube-controller-manager [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a] ...
	I1122 00:20:09.362226  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:09.391737  218693 logs.go:123] Gathering logs for kube-controller-manager [13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216] ...
	I1122 00:20:09.391763  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:09.425753  218693 logs.go:123] Gathering logs for container status ...
	I1122 00:20:09.425787  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:20:11.959328  218693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:20:11.959805  218693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:20:11.959868  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:20:11.959935  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:20:11.993113  218693 cri.go:89] found id: "031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:11.993137  218693 cri.go:89] found id: "2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:11.993143  218693 cri.go:89] found id: ""
	I1122 00:20:11.993153  218693 logs.go:282] 2 containers: [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587]
	I1122 00:20:11.993213  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:11.997946  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:12.002616  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1122 00:20:12.002741  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:20:12.040113  218693 cri.go:89] found id: "ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:12.040150  218693 cri.go:89] found id: ""
	I1122 00:20:12.040160  218693 logs.go:282] 1 containers: [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7]
	I1122 00:20:12.040220  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:12.045665  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1122 00:20:12.045732  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:20:12.081343  218693 cri.go:89] found id: ""
	I1122 00:20:12.081375  218693 logs.go:282] 0 containers: []
	W1122 00:20:12.081384  218693 logs.go:284] No container was found matching "coredns"
	I1122 00:20:12.081389  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:20:12.081449  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:20:12.116486  218693 cri.go:89] found id: "8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:12.117024  218693 cri.go:89] found id: "b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:12.117045  218693 cri.go:89] found id: ""
	I1122 00:20:12.117055  218693 logs.go:282] 2 containers: [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2]
	I1122 00:20:12.117115  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:12.121469  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:12.125453  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:20:12.125520  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:20:12.159076  218693 cri.go:89] found id: ""
	I1122 00:20:12.159108  218693 logs.go:282] 0 containers: []
	W1122 00:20:12.159121  218693 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:20:12.159130  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:20:12.159191  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:20:11.523900  260527 ssh_runner.go:195] Run: systemctl --version
	I1122 00:20:11.531084  260527 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:20:11.536010  260527 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:20:11.536130  260527 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:20:11.563766  260527 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1122 00:20:11.563792  260527 start.go:496] detecting cgroup driver to use...
	I1122 00:20:11.563830  260527 detect.go:190] detected "systemd" cgroup driver on host os
	I1122 00:20:11.563873  260527 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1122 00:20:11.579543  260527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1122 00:20:11.593598  260527 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:20:11.593666  260527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:20:11.610889  260527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:20:11.629723  260527 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:20:11.730670  260527 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:20:11.819921  260527 docker.go:234] disabling docker service ...
	I1122 00:20:11.819985  260527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:20:11.839159  260527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:20:11.854142  260527 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:20:11.943699  260527 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:20:12.053855  260527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:20:12.073171  260527 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:20:12.089999  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1122 00:20:12.105012  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1122 00:20:12.117591  260527 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1122 00:20:12.117652  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1122 00:20:12.128817  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:20:12.142147  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1122 00:20:12.154635  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:20:12.169029  260527 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:20:12.181631  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1122 00:20:12.194568  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1122 00:20:12.207294  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1122 00:20:12.218684  260527 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:20:12.228679  260527 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:20:12.241707  260527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:20:12.337447  260527 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1122 00:20:12.443801  260527 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1122 00:20:12.443870  260527 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1122 00:20:12.448114  260527 start.go:564] Will wait 60s for crictl version
	I1122 00:20:12.448178  260527 ssh_runner.go:195] Run: which crictl
	I1122 00:20:12.452113  260527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:20:12.481619  260527 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1122 00:20:12.481687  260527 ssh_runner.go:195] Run: containerd --version
	I1122 00:20:12.506954  260527 ssh_runner.go:195] Run: containerd --version
	I1122 00:20:12.537127  260527 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	W1122 00:20:08.528688  251199 node_ready.go:57] node "no-preload-781232" has "Ready":"False" status (will retry)
	W1122 00:20:10.529626  251199 node_ready.go:57] node "no-preload-781232" has "Ready":"False" status (will retry)
	I1122 00:20:12.029744  251199 node_ready.go:49] node "no-preload-781232" is "Ready"
	I1122 00:20:12.029782  251199 node_ready.go:38] duration metric: took 14.503754974s for node "no-preload-781232" to be "Ready" ...
	I1122 00:20:12.029799  251199 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:20:12.029867  251199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:20:12.049755  251199 api_server.go:72] duration metric: took 14.826557708s to wait for apiserver process to appear ...
	I1122 00:20:12.049782  251199 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:20:12.049803  251199 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1122 00:20:12.055733  251199 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1122 00:20:12.057374  251199 api_server.go:141] control plane version: v1.34.1
	I1122 00:20:12.057405  251199 api_server.go:131] duration metric: took 7.61544ms to wait for apiserver health ...
	I1122 00:20:12.057416  251199 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:20:12.062154  251199 system_pods.go:59] 8 kube-system pods found
	I1122 00:20:12.062190  251199 system_pods.go:61] "coredns-66bc5c9577-9wcct" [67b97cc5-016b-44d1-8119-dd6aa4932f83] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:12.062199  251199 system_pods.go:61] "etcd-no-preload-781232" [85c9627b-3102-439d-83e4-9ee3353591c1] Running
	I1122 00:20:12.062207  251199 system_pods.go:61] "kindnet-llcnc" [fcdd9f25-4804-47c2-8f09-b6a2d688a8bc] Running
	I1122 00:20:12.062212  251199 system_pods.go:61] "kube-apiserver-no-preload-781232" [4a4b5bf8-8262-46c5-9aa8-5a0bb0af364c] Running
	I1122 00:20:12.062218  251199 system_pods.go:61] "kube-controller-manager-no-preload-781232" [0c4fed80-9ce3-4b0d-99dd-ae11fc92104e] Running
	I1122 00:20:12.062223  251199 system_pods.go:61] "kube-proxy-685jg" [33a2d2c1-e364-4ec8-a9a0-69ba9146625f] Running
	I1122 00:20:12.062228  251199 system_pods.go:61] "kube-scheduler-no-preload-781232" [ec2ea83e-6638-4945-b4e4-ef3142f30481] Running
	I1122 00:20:12.062237  251199 system_pods.go:61] "storage-provisioner" [904bdf70-7728-45c5-a9ae-487aed28e6fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:12.062245  251199 system_pods.go:74] duration metric: took 4.821603ms to wait for pod list to return data ...
	I1122 00:20:12.062254  251199 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:20:12.065112  251199 default_sa.go:45] found service account: "default"
	I1122 00:20:12.065138  251199 default_sa.go:55] duration metric: took 2.848928ms for default service account to be created ...
	I1122 00:20:12.065149  251199 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:20:12.069582  251199 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:12.069625  251199 system_pods.go:89] "coredns-66bc5c9577-9wcct" [67b97cc5-016b-44d1-8119-dd6aa4932f83] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:12.069633  251199 system_pods.go:89] "etcd-no-preload-781232" [85c9627b-3102-439d-83e4-9ee3353591c1] Running
	I1122 00:20:12.069648  251199 system_pods.go:89] "kindnet-llcnc" [fcdd9f25-4804-47c2-8f09-b6a2d688a8bc] Running
	I1122 00:20:12.069655  251199 system_pods.go:89] "kube-apiserver-no-preload-781232" [4a4b5bf8-8262-46c5-9aa8-5a0bb0af364c] Running
	I1122 00:20:12.069661  251199 system_pods.go:89] "kube-controller-manager-no-preload-781232" [0c4fed80-9ce3-4b0d-99dd-ae11fc92104e] Running
	I1122 00:20:12.069666  251199 system_pods.go:89] "kube-proxy-685jg" [33a2d2c1-e364-4ec8-a9a0-69ba9146625f] Running
	I1122 00:20:12.069670  251199 system_pods.go:89] "kube-scheduler-no-preload-781232" [ec2ea83e-6638-4945-b4e4-ef3142f30481] Running
	I1122 00:20:12.069676  251199 system_pods.go:89] "storage-provisioner" [904bdf70-7728-45c5-a9ae-487aed28e6fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:12.069728  251199 retry.go:31] will retry after 227.269849ms: missing components: kube-dns
	I1122 00:20:12.301834  251199 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:12.301869  251199 system_pods.go:89] "coredns-66bc5c9577-9wcct" [67b97cc5-016b-44d1-8119-dd6aa4932f83] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:12.301877  251199 system_pods.go:89] "etcd-no-preload-781232" [85c9627b-3102-439d-83e4-9ee3353591c1] Running
	I1122 00:20:12.301886  251199 system_pods.go:89] "kindnet-llcnc" [fcdd9f25-4804-47c2-8f09-b6a2d688a8bc] Running
	I1122 00:20:12.301892  251199 system_pods.go:89] "kube-apiserver-no-preload-781232" [4a4b5bf8-8262-46c5-9aa8-5a0bb0af364c] Running
	I1122 00:20:12.301898  251199 system_pods.go:89] "kube-controller-manager-no-preload-781232" [0c4fed80-9ce3-4b0d-99dd-ae11fc92104e] Running
	I1122 00:20:12.301903  251199 system_pods.go:89] "kube-proxy-685jg" [33a2d2c1-e364-4ec8-a9a0-69ba9146625f] Running
	I1122 00:20:12.301910  251199 system_pods.go:89] "kube-scheduler-no-preload-781232" [ec2ea83e-6638-4945-b4e4-ef3142f30481] Running
	I1122 00:20:12.301917  251199 system_pods.go:89] "storage-provisioner" [904bdf70-7728-45c5-a9ae-487aed28e6fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:12.301938  251199 retry.go:31] will retry after 387.887736ms: missing components: kube-dns
	I1122 00:20:12.694992  251199 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:12.695026  251199 system_pods.go:89] "coredns-66bc5c9577-9wcct" [67b97cc5-016b-44d1-8119-dd6aa4932f83] Running
	I1122 00:20:12.695035  251199 system_pods.go:89] "etcd-no-preload-781232" [85c9627b-3102-439d-83e4-9ee3353591c1] Running
	I1122 00:20:12.695041  251199 system_pods.go:89] "kindnet-llcnc" [fcdd9f25-4804-47c2-8f09-b6a2d688a8bc] Running
	I1122 00:20:12.695047  251199 system_pods.go:89] "kube-apiserver-no-preload-781232" [4a4b5bf8-8262-46c5-9aa8-5a0bb0af364c] Running
	I1122 00:20:12.695052  251199 system_pods.go:89] "kube-controller-manager-no-preload-781232" [0c4fed80-9ce3-4b0d-99dd-ae11fc92104e] Running
	I1122 00:20:12.695060  251199 system_pods.go:89] "kube-proxy-685jg" [33a2d2c1-e364-4ec8-a9a0-69ba9146625f] Running
	I1122 00:20:12.695065  251199 system_pods.go:89] "kube-scheduler-no-preload-781232" [ec2ea83e-6638-4945-b4e4-ef3142f30481] Running
	I1122 00:20:12.695070  251199 system_pods.go:89] "storage-provisioner" [904bdf70-7728-45c5-a9ae-487aed28e6fc] Running
	I1122 00:20:12.695080  251199 system_pods.go:126] duration metric: took 629.924123ms to wait for k8s-apps to be running ...
	I1122 00:20:12.695093  251199 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:20:12.695144  251199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:20:12.708823  251199 system_svc.go:56] duration metric: took 13.721013ms WaitForService to wait for kubelet
	I1122 00:20:12.708855  251199 kubeadm.go:587] duration metric: took 15.485663176s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:20:12.708874  251199 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:20:12.712345  251199 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1122 00:20:12.712376  251199 node_conditions.go:123] node cpu capacity is 8
	I1122 00:20:12.712396  251199 node_conditions.go:105] duration metric: took 3.516354ms to run NodePressure ...
	I1122 00:20:12.712412  251199 start.go:242] waiting for startup goroutines ...
	I1122 00:20:12.712423  251199 start.go:247] waiting for cluster config update ...
	I1122 00:20:12.712441  251199 start.go:256] writing updated cluster config ...
	I1122 00:20:12.712733  251199 ssh_runner.go:195] Run: rm -f paused
	I1122 00:20:12.717390  251199 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:20:12.721696  251199 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9wcct" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:12.726947  251199 pod_ready.go:94] pod "coredns-66bc5c9577-9wcct" is "Ready"
	I1122 00:20:12.726976  251199 pod_ready.go:86] duration metric: took 5.255643ms for pod "coredns-66bc5c9577-9wcct" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:12.729559  251199 pod_ready.go:83] waiting for pod "etcd-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:12.734425  251199 pod_ready.go:94] pod "etcd-no-preload-781232" is "Ready"
	I1122 00:20:12.734455  251199 pod_ready.go:86] duration metric: took 4.86467ms for pod "etcd-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:12.736916  251199 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:12.741485  251199 pod_ready.go:94] pod "kube-apiserver-no-preload-781232" is "Ready"
	I1122 00:20:12.741515  251199 pod_ready.go:86] duration metric: took 4.574913ms for pod "kube-apiserver-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:12.743848  251199 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:13.121924  251199 pod_ready.go:94] pod "kube-controller-manager-no-preload-781232" is "Ready"
	I1122 00:20:13.121957  251199 pod_ready.go:86] duration metric: took 378.084436ms for pod "kube-controller-manager-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:13.322463  251199 pod_ready.go:83] waiting for pod "kube-proxy-685jg" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:13.721973  251199 pod_ready.go:94] pod "kube-proxy-685jg" is "Ready"
	I1122 00:20:13.722003  251199 pod_ready.go:86] duration metric: took 399.513258ms for pod "kube-proxy-685jg" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:13.922497  251199 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:14.322798  251199 pod_ready.go:94] pod "kube-scheduler-no-preload-781232" is "Ready"
	I1122 00:20:14.322835  251199 pod_ready.go:86] duration metric: took 400.307889ms for pod "kube-scheduler-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:14.322851  251199 pod_ready.go:40] duration metric: took 1.605427799s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:20:14.392629  251199 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1122 00:20:14.394856  251199 out.go:179] * Done! kubectl is now configured to use "no-preload-781232" cluster and "default" namespace by default
	I1122 00:20:12.541500  260527 cli_runner.go:164] Run: docker network inspect embed-certs-491677 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:20:12.574015  260527 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1122 00:20:12.578297  260527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:20:12.589491  260527 kubeadm.go:884] updating cluster {Name:embed-certs-491677 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-491677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:20:12.589632  260527 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:20:12.589697  260527 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:20:12.617010  260527 containerd.go:627] all images are preloaded for containerd runtime.
	I1122 00:20:12.617037  260527 containerd.go:534] Images already preloaded, skipping extraction
	I1122 00:20:12.617098  260527 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:20:12.644125  260527 containerd.go:627] all images are preloaded for containerd runtime.
	I1122 00:20:12.644148  260527 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:20:12.644157  260527 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1122 00:20:12.644310  260527 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-491677 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-491677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:20:12.644388  260527 ssh_runner.go:195] Run: sudo crictl info
	I1122 00:20:12.673869  260527 cni.go:84] Creating CNI manager for ""
	I1122 00:20:12.673899  260527 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:20:12.673919  260527 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:20:12.673948  260527 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-491677 NodeName:embed-certs-491677 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:20:12.674142  260527 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-491677"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:20:12.674219  260527 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:20:12.683635  260527 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:20:12.683710  260527 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:20:12.692341  260527 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1122 00:20:12.708136  260527 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:20:12.727111  260527 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
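	The kubeadm.yaml.new written above is the multi-document config dumped earlier in this log. As a side note (not something the test runs), a config like this can be sanity-checked outside of minikube with kubeadm's own validator, assuming kubeadm v1.26 or newer is on the PATH:
	# Validate the generated kubeadm config (InitConfiguration, ClusterConfiguration,
	# KubeletConfiguration, KubeProxyConfiguration) without applying it.
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new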
	I1122 00:20:12.743788  260527 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:20:12.747754  260527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:20:12.758812  260527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:20:12.844867  260527 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:20:12.869740  260527 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677 for IP: 192.168.85.2
	I1122 00:20:12.869763  260527 certs.go:195] generating shared ca certs ...
	I1122 00:20:12.869790  260527 certs.go:227] acquiring lock for ca certs: {Name:mkcee17f48cab2703d4de8a78a6fb8af44d9e7e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:20:12.869989  260527 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-9059/.minikube/ca.key
	I1122 00:20:12.870065  260527 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-9059/.minikube/proxy-client-ca.key
	I1122 00:20:12.870084  260527 certs.go:257] generating profile certs ...
	I1122 00:20:12.870146  260527 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/client.key
	I1122 00:20:12.870166  260527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/client.crt with IP's: []
	I1122 00:20:12.908186  260527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/client.crt ...
	I1122 00:20:12.908216  260527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/client.crt: {Name:mk8704ecde753d7119b44ed45cfda92e5dc05630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:20:12.908420  260527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/client.key ...
	I1122 00:20:12.908436  260527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/client.key: {Name:mkb2d6bf770bf45b16a4eca78c32fdcff2885211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:20:12.908547  260527 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/apiserver.key.c79253ad
	I1122 00:20:12.908570  260527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/apiserver.crt.c79253ad with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1122 00:20:13.019354  260527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/apiserver.crt.c79253ad ...
	I1122 00:20:13.019392  260527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/apiserver.crt.c79253ad: {Name:mk1762d9d01731b3cbac46975805ab095bb2b8bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:20:13.019599  260527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/apiserver.key.c79253ad ...
	I1122 00:20:13.019618  260527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/apiserver.key.c79253ad: {Name:mk75d2f3b968084584154e473183ab1de1ddfdef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:20:13.019739  260527 certs.go:382] copying /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/apiserver.crt.c79253ad -> /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/apiserver.crt
	I1122 00:20:13.019842  260527 certs.go:386] copying /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/apiserver.key.c79253ad -> /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/apiserver.key
	I1122 00:20:13.019938  260527 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/proxy-client.key
	I1122 00:20:13.019956  260527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/proxy-client.crt with IP's: []
	I1122 00:20:13.050653  260527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/proxy-client.crt ...
	I1122 00:20:13.050681  260527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/proxy-client.crt: {Name:mka897102b38131787dec19ca98371262dbbfbff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:20:13.050873  260527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/proxy-client.key ...
	I1122 00:20:13.050902  260527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/proxy-client.key: {Name:mk16d21cb9e06711fe89c5e2d2bb5e78642dddf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:20:13.051132  260527 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/14530.pem (1338 bytes)
	W1122 00:20:13.051181  260527 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-9059/.minikube/certs/14530_empty.pem, impossibly tiny 0 bytes
	I1122 00:20:13.051197  260527 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca-key.pem (1675 bytes)
	I1122 00:20:13.051233  260527 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem (1082 bytes)
	I1122 00:20:13.051277  260527 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:20:13.051314  260527 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/key.pem (1679 bytes)
	I1122 00:20:13.051374  260527 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/ssl/certs/145302.pem (1708 bytes)
	I1122 00:20:13.051960  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:20:13.070735  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:20:13.090249  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:20:13.108597  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1122 00:20:13.128028  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1122 00:20:13.147582  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:20:13.165509  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:20:13.183679  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1122 00:20:13.202761  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:20:13.225144  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/certs/14530.pem --> /usr/share/ca-certificates/14530.pem (1338 bytes)
	I1122 00:20:13.243856  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/ssl/certs/145302.pem --> /usr/share/ca-certificates/145302.pem (1708 bytes)
	I1122 00:20:13.263152  260527 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:20:13.277050  260527 ssh_runner.go:195] Run: openssl version
	I1122 00:20:13.283706  260527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:20:13.294163  260527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:20:13.298425  260527 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:47 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:20:13.298493  260527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:20:13.335091  260527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:20:13.344437  260527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14530.pem && ln -fs /usr/share/ca-certificates/14530.pem /etc/ssl/certs/14530.pem"
	I1122 00:20:13.354368  260527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14530.pem
	I1122 00:20:13.358613  260527 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:52 /usr/share/ca-certificates/14530.pem
	I1122 00:20:13.358673  260527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14530.pem
	I1122 00:20:13.393614  260527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14530.pem /etc/ssl/certs/51391683.0"
	I1122 00:20:13.403768  260527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145302.pem && ln -fs /usr/share/ca-certificates/145302.pem /etc/ssl/certs/145302.pem"
	I1122 00:20:13.412603  260527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145302.pem
	I1122 00:20:13.416857  260527 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:52 /usr/share/ca-certificates/145302.pem
	I1122 00:20:13.416924  260527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145302.pem
	I1122 00:20:13.454565  260527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145302.pem /etc/ssl/certs/3ec20f2e.0"
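	The link names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hashes of the respective certificates, which is how TLS libraries look up CAs under /etc/ssl/certs. A minimal sketch of the same hash-and-link step for a single cert, with the path taken from the log:
	# Derive the subject hash OpenSSL uses for CA lookup, then expose the cert as /etc/ssl/certs/<hash>.0.
	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"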
	I1122 00:20:13.464818  260527 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:20:13.468886  260527 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1122 00:20:13.468942  260527 kubeadm.go:401] StartCluster: {Name:embed-certs-491677 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-491677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:20:13.469046  260527 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1122 00:20:13.469089  260527 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:20:13.496549  260527 cri.go:89] found id: ""
	I1122 00:20:13.496613  260527 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:20:13.505745  260527 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:20:13.515197  260527 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:20:13.515253  260527 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:20:13.524576  260527 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:20:13.524596  260527 kubeadm.go:158] found existing configuration files:
	
	I1122 00:20:13.524646  260527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1122 00:20:13.533544  260527 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:20:13.533603  260527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:20:13.542351  260527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1122 00:20:13.552273  260527 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:20:13.552347  260527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:20:13.562028  260527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1122 00:20:13.571876  260527 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:20:13.571926  260527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:20:13.582394  260527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1122 00:20:13.591183  260527 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:20:13.591246  260527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1122 00:20:13.600121  260527 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:20:13.660570  260527 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1122 00:20:13.719464  260527 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1122 00:20:12.194554  218693 cri.go:89] found id: "91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:12.194580  218693 cri.go:89] found id: "13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:12.194586  218693 cri.go:89] found id: ""
	I1122 00:20:12.194597  218693 logs.go:282] 2 containers: [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216]
	I1122 00:20:12.194653  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:12.200688  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:12.205547  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1122 00:20:12.205617  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:20:12.243134  218693 cri.go:89] found id: ""
	I1122 00:20:12.243161  218693 logs.go:282] 0 containers: []
	W1122 00:20:12.243171  218693 logs.go:284] No container was found matching "kindnet"
	I1122 00:20:12.243181  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:20:12.243239  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:20:12.271092  218693 cri.go:89] found id: ""
	I1122 00:20:12.271125  218693 logs.go:282] 0 containers: []
	W1122 00:20:12.271137  218693 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:20:12.271149  218693 logs.go:123] Gathering logs for kube-apiserver [2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587] ...
	I1122 00:20:12.271168  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:12.310696  218693 logs.go:123] Gathering logs for kube-scheduler [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78] ...
	I1122 00:20:12.310725  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:12.367453  218693 logs.go:123] Gathering logs for kube-scheduler [b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2] ...
	I1122 00:20:12.367497  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:12.401777  218693 logs.go:123] Gathering logs for container status ...
	I1122 00:20:12.401820  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:20:12.437519  218693 logs.go:123] Gathering logs for kubelet ...
	I1122 00:20:12.437557  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:20:12.543639  218693 logs.go:123] Gathering logs for dmesg ...
	I1122 00:20:12.543674  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:20:12.570582  218693 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:20:12.570613  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:20:12.633684  218693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:20:12.633704  218693 logs.go:123] Gathering logs for kube-apiserver [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d] ...
	I1122 00:20:12.633716  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:12.667421  218693 logs.go:123] Gathering logs for etcd [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7] ...
	I1122 00:20:12.667454  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:12.703894  218693 logs.go:123] Gathering logs for kube-controller-manager [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a] ...
	I1122 00:20:12.703924  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:12.736729  218693 logs.go:123] Gathering logs for kube-controller-manager [13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216] ...
	I1122 00:20:12.736764  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:12.771593  218693 logs.go:123] Gathering logs for containerd ...
	I1122 00:20:12.771626  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1122 00:20:15.325334  218693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:20:15.325674  218693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:20:15.325737  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:20:15.325785  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:20:15.360483  218693 cri.go:89] found id: "031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:15.360505  218693 cri.go:89] found id: "2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:15.360519  218693 cri.go:89] found id: ""
	I1122 00:20:15.360536  218693 logs.go:282] 2 containers: [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587]
	I1122 00:20:15.360596  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:15.365000  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:15.369124  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1122 00:20:15.369192  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:20:15.400520  218693 cri.go:89] found id: "ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:15.400545  218693 cri.go:89] found id: ""
	I1122 00:20:15.400556  218693 logs.go:282] 1 containers: [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7]
	I1122 00:20:15.400615  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:15.405111  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1122 00:20:15.405188  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:20:15.440253  218693 cri.go:89] found id: ""
	I1122 00:20:15.440297  218693 logs.go:282] 0 containers: []
	W1122 00:20:15.440308  218693 logs.go:284] No container was found matching "coredns"
	I1122 00:20:15.440317  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:20:15.440381  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:20:15.475042  218693 cri.go:89] found id: "8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:15.475067  218693 cri.go:89] found id: "b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:15.475073  218693 cri.go:89] found id: ""
	I1122 00:20:15.475082  218693 logs.go:282] 2 containers: [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2]
	I1122 00:20:15.475143  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:15.479941  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:15.484606  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:20:15.484676  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:20:15.518209  218693 cri.go:89] found id: ""
	I1122 00:20:15.518299  218693 logs.go:282] 0 containers: []
	W1122 00:20:15.518314  218693 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:20:15.518323  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:20:15.518397  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:20:15.549238  218693 cri.go:89] found id: "718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f"
	I1122 00:20:15.549298  218693 cri.go:89] found id: "91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:15.549306  218693 cri.go:89] found id: "13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:15.549311  218693 cri.go:89] found id: ""
	I1122 00:20:15.549321  218693 logs.go:282] 3 containers: [718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f 91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216]
	I1122 00:20:15.549409  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:15.554575  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:15.559690  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:15.564140  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1122 00:20:15.564212  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:20:15.593979  218693 cri.go:89] found id: ""
	I1122 00:20:15.594001  218693 logs.go:282] 0 containers: []
	W1122 00:20:15.594009  218693 logs.go:284] No container was found matching "kindnet"
	I1122 00:20:15.594016  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:20:15.594076  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:20:15.621716  218693 cri.go:89] found id: ""
	I1122 00:20:15.621740  218693 logs.go:282] 0 containers: []
	W1122 00:20:15.621751  218693 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:20:15.621763  218693 logs.go:123] Gathering logs for dmesg ...
	I1122 00:20:15.621777  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:20:15.635879  218693 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:20:15.635908  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:20:15.700277  218693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:20:15.700302  218693 logs.go:123] Gathering logs for kube-apiserver [2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587] ...
	I1122 00:20:15.700368  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:15.744118  218693 logs.go:123] Gathering logs for kube-scheduler [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78] ...
	I1122 00:20:15.744151  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:15.804869  218693 logs.go:123] Gathering logs for kube-scheduler [b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2] ...
	I1122 00:20:15.804914  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:15.852799  218693 logs.go:123] Gathering logs for kube-controller-manager [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a] ...
	I1122 00:20:15.852837  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:15.886163  218693 logs.go:123] Gathering logs for kube-controller-manager [13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216] ...
	I1122 00:20:15.886199  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:15.922695  218693 logs.go:123] Gathering logs for containerd ...
	I1122 00:20:15.922727  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1122 00:20:15.974295  218693 logs.go:123] Gathering logs for kubelet ...
	I1122 00:20:15.974327  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:20:16.072397  218693 logs.go:123] Gathering logs for kube-apiserver [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d] ...
	I1122 00:20:16.072432  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:16.107409  218693 logs.go:123] Gathering logs for etcd [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7] ...
	I1122 00:20:16.107443  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:16.140406  218693 logs.go:123] Gathering logs for kube-controller-manager [718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f] ...
	I1122 00:20:16.140442  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f"
	I1122 00:20:16.176750  218693 logs.go:123] Gathering logs for container status ...
	I1122 00:20:16.176792  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:20:18.717355  218693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:20:18.717807  218693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:20:18.717881  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:20:18.717941  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:20:18.769197  218693 cri.go:89] found id: "031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:18.769228  218693 cri.go:89] found id: "2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:18.769235  218693 cri.go:89] found id: ""
	I1122 00:20:18.769244  218693 logs.go:282] 2 containers: [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587]
	I1122 00:20:18.769347  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:18.777815  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:18.783829  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1122 00:20:18.783910  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:20:18.824794  218693 cri.go:89] found id: "ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:18.824816  218693 cri.go:89] found id: ""
	I1122 00:20:18.824826  218693 logs.go:282] 1 containers: [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7]
	I1122 00:20:18.824884  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:18.829608  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1122 00:20:18.829692  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:20:18.865927  218693 cri.go:89] found id: ""
	I1122 00:20:18.865964  218693 logs.go:282] 0 containers: []
	W1122 00:20:18.865977  218693 logs.go:284] No container was found matching "coredns"
	I1122 00:20:18.865985  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:20:18.866042  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:20:18.899699  218693 cri.go:89] found id: "8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:18.899718  218693 cri.go:89] found id: "b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:18.899722  218693 cri.go:89] found id: ""
	I1122 00:20:18.899730  218693 logs.go:282] 2 containers: [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2]
	I1122 00:20:18.899775  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:18.904742  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:18.910347  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:20:18.910428  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:20:18.943667  218693 cri.go:89] found id: ""
	I1122 00:20:18.943693  218693 logs.go:282] 0 containers: []
	W1122 00:20:18.943702  218693 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:20:18.943710  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:20:18.943775  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:20:18.979450  218693 cri.go:89] found id: "718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f"
	I1122 00:20:18.979488  218693 cri.go:89] found id: "91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:18.979496  218693 cri.go:89] found id: "13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:18.979502  218693 cri.go:89] found id: ""
	I1122 00:20:18.979512  218693 logs.go:282] 3 containers: [718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f 91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216]
	I1122 00:20:18.979585  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:18.984932  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:18.989393  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:18.996874  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1122 00:20:18.996940  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:20:19.045639  218693 cri.go:89] found id: ""
	I1122 00:20:19.045665  218693 logs.go:282] 0 containers: []
	W1122 00:20:19.045683  218693 logs.go:284] No container was found matching "kindnet"
	I1122 00:20:19.045691  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:20:19.045746  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:20:19.082793  218693 cri.go:89] found id: ""
	I1122 00:20:19.082818  218693 logs.go:282] 0 containers: []
	W1122 00:20:19.082832  218693 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:20:19.082843  218693 logs.go:123] Gathering logs for kubelet ...
	I1122 00:20:19.082857  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:20:19.202501  218693 logs.go:123] Gathering logs for dmesg ...
	I1122 00:20:19.202545  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:20:19.221253  218693 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:20:19.221346  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:20:19.311057  218693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:20:19.311138  218693 logs.go:123] Gathering logs for kube-apiserver [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d] ...
	I1122 00:20:19.311172  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:19.351947  218693 logs.go:123] Gathering logs for kube-apiserver [2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587] ...
	I1122 00:20:19.351994  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:19.405038  218693 logs.go:123] Gathering logs for kube-controller-manager [13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216] ...
	I1122 00:20:19.405079  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:19.449168  218693 logs.go:123] Gathering logs for containerd ...
	I1122 00:20:19.449210  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1122 00:20:19.516475  218693 logs.go:123] Gathering logs for container status ...
	I1122 00:20:19.516518  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:20:19.556284  218693 logs.go:123] Gathering logs for etcd [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7] ...
	I1122 00:20:19.556324  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:19.600214  218693 logs.go:123] Gathering logs for kube-scheduler [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78] ...
	I1122 00:20:19.600248  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:19.667408  218693 logs.go:123] Gathering logs for kube-scheduler [b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2] ...
	I1122 00:20:19.667453  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:19.712773  218693 logs.go:123] Gathering logs for kube-controller-manager [718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f] ...
	I1122 00:20:19.712809  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f"
	I1122 00:20:19.747902  218693 logs.go:123] Gathering logs for kube-controller-manager [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a] ...
	I1122 00:20:19.747943  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:24.013741  260527 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1122 00:20:24.013841  260527 kubeadm.go:319] [preflight] Running pre-flight checks
	I1122 00:20:24.013971  260527 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:20:24.014051  260527 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1122 00:20:24.014118  260527 kubeadm.go:319] OS: Linux
	I1122 00:20:24.014182  260527 kubeadm.go:319] CGROUPS_CPU: enabled
	I1122 00:20:24.014342  260527 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1122 00:20:24.014400  260527 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1122 00:20:24.014481  260527 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1122 00:20:24.014580  260527 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1122 00:20:24.014656  260527 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1122 00:20:24.014752  260527 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1122 00:20:24.014831  260527 kubeadm.go:319] CGROUPS_IO: enabled
	I1122 00:20:24.014932  260527 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:20:24.015087  260527 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:20:24.015224  260527 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1122 00:20:24.015326  260527 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1122 00:20:24.018013  260527 out.go:252]   - Generating certificates and keys ...
	I1122 00:20:24.018127  260527 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1122 00:20:24.018237  260527 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1122 00:20:24.018376  260527 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 00:20:24.018448  260527 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1122 00:20:24.018509  260527 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1122 00:20:24.018566  260527 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1122 00:20:24.018652  260527 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1122 00:20:24.018800  260527 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-491677 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1122 00:20:24.018874  260527 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1122 00:20:24.019069  260527 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-491677 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1122 00:20:24.019133  260527 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 00:20:24.019192  260527 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1122 00:20:24.019236  260527 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1122 00:20:24.019319  260527 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 00:20:24.019387  260527 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 00:20:24.019472  260527 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1122 00:20:24.019550  260527 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 00:20:24.019653  260527 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1122 00:20:24.019755  260527 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 00:20:24.019900  260527 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 00:20:24.020006  260527 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1122 00:20:24.021383  260527 out.go:252]   - Booting up control plane ...
	I1122 00:20:24.021498  260527 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 00:20:24.021574  260527 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 00:20:24.021685  260527 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 00:20:24.021840  260527 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 00:20:24.022055  260527 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1122 00:20:24.022224  260527 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1122 00:20:24.022409  260527 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 00:20:24.022482  260527 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1122 00:20:24.022688  260527 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1122 00:20:24.022859  260527 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1122 00:20:24.022943  260527 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.48157ms
	I1122 00:20:24.023076  260527 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1122 00:20:24.023215  260527 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1122 00:20:24.023334  260527 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1122 00:20:24.023413  260527 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1122 00:20:24.023496  260527 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.022807436s
	I1122 00:20:24.023563  260527 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.506727027s
	I1122 00:20:24.023625  260527 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501637409s
	I1122 00:20:24.023715  260527 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1122 00:20:24.023826  260527 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1122 00:20:24.023880  260527 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1122 00:20:24.024111  260527 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-491677 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1122 00:20:24.024209  260527 kubeadm.go:319] [bootstrap-token] Using token: zuydkb.uvh9448kov8j9p0k
	I1122 00:20:24.026466  260527 out.go:252]   - Configuring RBAC rules ...
	I1122 00:20:24.026583  260527 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1122 00:20:24.026681  260527 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1122 00:20:24.026862  260527 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1122 00:20:24.027045  260527 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1122 00:20:24.027192  260527 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1122 00:20:24.027307  260527 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1122 00:20:24.027453  260527 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1122 00:20:24.027507  260527 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1122 00:20:24.027586  260527 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1122 00:20:24.027594  260527 kubeadm.go:319] 
	I1122 00:20:24.027679  260527 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1122 00:20:24.027687  260527 kubeadm.go:319] 
	I1122 00:20:24.027780  260527 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1122 00:20:24.027788  260527 kubeadm.go:319] 
	I1122 00:20:24.027832  260527 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1122 00:20:24.028013  260527 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1122 00:20:24.028100  260527 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1122 00:20:24.028108  260527 kubeadm.go:319] 
	I1122 00:20:24.028209  260527 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1122 00:20:24.028222  260527 kubeadm.go:319] 
	I1122 00:20:24.028290  260527 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1122 00:20:24.028300  260527 kubeadm.go:319] 
	I1122 00:20:24.028367  260527 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1122 00:20:24.028476  260527 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1122 00:20:24.028653  260527 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1122 00:20:24.028671  260527 kubeadm.go:319] 
	I1122 00:20:24.028801  260527 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1122 00:20:24.028946  260527 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1122 00:20:24.028964  260527 kubeadm.go:319] 
	I1122 00:20:24.029080  260527 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token zuydkb.uvh9448kov8j9p0k \
	I1122 00:20:24.029247  260527 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2af5fc9ecf777c709212eeb70ba373979920cc452e3ef3a8f29babe0281d5739 \
	I1122 00:20:24.029294  260527 kubeadm.go:319] 	--control-plane 
	I1122 00:20:24.029301  260527 kubeadm.go:319] 
	I1122 00:20:24.029452  260527 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1122 00:20:24.029466  260527 kubeadm.go:319] 
	I1122 00:20:24.029655  260527 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token zuydkb.uvh9448kov8j9p0k \
	I1122 00:20:24.029832  260527 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2af5fc9ecf777c709212eeb70ba373979920cc452e3ef3a8f29babe0281d5739 
	I1122 00:20:24.029849  260527 cni.go:84] Creating CNI manager for ""
	I1122 00:20:24.029857  260527 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:20:24.031762  260527 out.go:179] * Configuring CNI (Container Networking Interface) ...
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	1be7176c234f3       56cc512116c8f       7 seconds ago       Running             busybox                   0                   1564f6b28ec4d       busybox                                     default
	b61337c7649d1       52546a367cc9e       12 seconds ago      Running             coredns                   0                   40840a536016c       coredns-66bc5c9577-9wcct                    kube-system
	a8df28ee53bb6       6e38f40d628db       12 seconds ago      Running             storage-provisioner       0                   a70c94b4c1943       storage-provisioner                         kube-system
	304e6535bf7be       409467f978b4a       23 seconds ago      Running             kindnet-cni               0                   068dfc53e6eb8       kindnet-llcnc                               kube-system
	2b0f0e4e1df6d       fc25172553d79       27 seconds ago      Running             kube-proxy                0                   85fd4cd4e5d99       kube-proxy-685jg                            kube-system
	13c5477f80d07       c80c8dbafe7dd       38 seconds ago      Running             kube-controller-manager   0                   b6ae800cc9296       kube-controller-manager-no-preload-781232   kube-system
	6b02e9e9a0792       7dd6aaa1717ab       38 seconds ago      Running             kube-scheduler            0                   3af2c78e96fc1       kube-scheduler-no-preload-781232            kube-system
	7f1227117afb1       c3994bc696102       38 seconds ago      Running             kube-apiserver            0                   be95c3994ed3e       kube-apiserver-no-preload-781232            kube-system
	190bb0852270a       5f1f5298c888d       38 seconds ago      Running             etcd                      0                   3f1e015b9de63       etcd-no-preload-781232                      kube-system
	
	
	==> containerd <==
	Nov 22 00:20:12 no-preload-781232 containerd[659]: time="2025-11-22T00:20:12.125026462Z" level=info msg="Container b61337c7649d1c8ad6db13120b3d0c9730687561de6dd7c132264eba4d1070be: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:20:12 no-preload-781232 containerd[659]: time="2025-11-22T00:20:12.128227043Z" level=info msg="CreateContainer within sandbox \"a70c94b4c1943564b88b616b626e0c720041932bf4d08a29afacedb7821e49d6\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"a8df28ee53bb60379874726c9a896717f75e12fd13a7316e60ad11da58feca4a\""
	Nov 22 00:20:12 no-preload-781232 containerd[659]: time="2025-11-22T00:20:12.128922999Z" level=info msg="StartContainer for \"a8df28ee53bb60379874726c9a896717f75e12fd13a7316e60ad11da58feca4a\""
	Nov 22 00:20:12 no-preload-781232 containerd[659]: time="2025-11-22T00:20:12.130977320Z" level=info msg="connecting to shim a8df28ee53bb60379874726c9a896717f75e12fd13a7316e60ad11da58feca4a" address="unix:///run/containerd/s/a41072d8e56c0c4fd852fc058c033ea42aa1d30a23fb4a4e2d21bc0cf055ef17" protocol=ttrpc version=3
	Nov 22 00:20:12 no-preload-781232 containerd[659]: time="2025-11-22T00:20:12.135197186Z" level=info msg="CreateContainer within sandbox \"40840a536016c7c55af754ac43b03e221f1e60e49a2788ad5f3cf727dfb8737b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b61337c7649d1c8ad6db13120b3d0c9730687561de6dd7c132264eba4d1070be\""
	Nov 22 00:20:12 no-preload-781232 containerd[659]: time="2025-11-22T00:20:12.135926316Z" level=info msg="StartContainer for \"b61337c7649d1c8ad6db13120b3d0c9730687561de6dd7c132264eba4d1070be\""
	Nov 22 00:20:12 no-preload-781232 containerd[659]: time="2025-11-22T00:20:12.137158693Z" level=info msg="connecting to shim b61337c7649d1c8ad6db13120b3d0c9730687561de6dd7c132264eba4d1070be" address="unix:///run/containerd/s/b70595bb46ca14d96b4daefe8d0b2298a7d6dc2f56420769b86ef6dc7df0b4d8" protocol=ttrpc version=3
	Nov 22 00:20:12 no-preload-781232 containerd[659]: time="2025-11-22T00:20:12.191141980Z" level=info msg="StartContainer for \"a8df28ee53bb60379874726c9a896717f75e12fd13a7316e60ad11da58feca4a\" returns successfully"
	Nov 22 00:20:12 no-preload-781232 containerd[659]: time="2025-11-22T00:20:12.196884971Z" level=info msg="StartContainer for \"b61337c7649d1c8ad6db13120b3d0c9730687561de6dd7c132264eba4d1070be\" returns successfully"
	Nov 22 00:20:14 no-preload-781232 containerd[659]: time="2025-11-22T00:20:14.895991422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:c9470f46-fa0e-479c-82bc-857ad36201bf,Namespace:default,Attempt:0,}"
	Nov 22 00:20:14 no-preload-781232 containerd[659]: time="2025-11-22T00:20:14.941640673Z" level=info msg="connecting to shim 1564f6b28ec4dff922fb583de118d245a6ac03f32306a9cc980e0038aecbf0a8" address="unix:///run/containerd/s/b2df10ad6ace202b32f7a35c18d5e2dd63a4edfdd3c65601dfc1d680d40dd139" namespace=k8s.io protocol=ttrpc version=3
	Nov 22 00:20:15 no-preload-781232 containerd[659]: time="2025-11-22T00:20:15.018828312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:c9470f46-fa0e-479c-82bc-857ad36201bf,Namespace:default,Attempt:0,} returns sandbox id \"1564f6b28ec4dff922fb583de118d245a6ac03f32306a9cc980e0038aecbf0a8\""
	Nov 22 00:20:15 no-preload-781232 containerd[659]: time="2025-11-22T00:20:15.021209974Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 22 00:20:17 no-preload-781232 containerd[659]: time="2025-11-22T00:20:17.219026390Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:20:17 no-preload-781232 containerd[659]: time="2025-11-22T00:20:17.219980963Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396644"
	Nov 22 00:20:17 no-preload-781232 containerd[659]: time="2025-11-22T00:20:17.221627420Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:20:17 no-preload-781232 containerd[659]: time="2025-11-22T00:20:17.224055902Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:20:17 no-preload-781232 containerd[659]: time="2025-11-22T00:20:17.224595592Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.203346709s"
	Nov 22 00:20:17 no-preload-781232 containerd[659]: time="2025-11-22T00:20:17.224633234Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 22 00:20:17 no-preload-781232 containerd[659]: time="2025-11-22T00:20:17.230110579Z" level=info msg="CreateContainer within sandbox \"1564f6b28ec4dff922fb583de118d245a6ac03f32306a9cc980e0038aecbf0a8\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 22 00:20:17 no-preload-781232 containerd[659]: time="2025-11-22T00:20:17.240493581Z" level=info msg="Container 1be7176c234f3a80674fd6f9b54181ed294ceb48ab785db551e1cf298de28067: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:20:17 no-preload-781232 containerd[659]: time="2025-11-22T00:20:17.247404428Z" level=info msg="CreateContainer within sandbox \"1564f6b28ec4dff922fb583de118d245a6ac03f32306a9cc980e0038aecbf0a8\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"1be7176c234f3a80674fd6f9b54181ed294ceb48ab785db551e1cf298de28067\""
	Nov 22 00:20:17 no-preload-781232 containerd[659]: time="2025-11-22T00:20:17.248085899Z" level=info msg="StartContainer for \"1be7176c234f3a80674fd6f9b54181ed294ceb48ab785db551e1cf298de28067\""
	Nov 22 00:20:17 no-preload-781232 containerd[659]: time="2025-11-22T00:20:17.249115545Z" level=info msg="connecting to shim 1be7176c234f3a80674fd6f9b54181ed294ceb48ab785db551e1cf298de28067" address="unix:///run/containerd/s/b2df10ad6ace202b32f7a35c18d5e2dd63a4edfdd3c65601dfc1d680d40dd139" protocol=ttrpc version=3
	Nov 22 00:20:17 no-preload-781232 containerd[659]: time="2025-11-22T00:20:17.315789801Z" level=info msg="StartContainer for \"1be7176c234f3a80674fd6f9b54181ed294ceb48ab785db551e1cf298de28067\" returns successfully"
	
	
	==> coredns [b61337c7649d1c8ad6db13120b3d0c9730687561de6dd7c132264eba4d1070be] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48969 - 46525 "HINFO IN 6647442209668263628.3620737544070114. udp 54 false 512" NXDOMAIN qr,rd,ra 129 0.024232555s
	
	
	==> describe nodes <==
	Name:               no-preload-781232
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-781232
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=no-preload-781232
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_19_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:19:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-781232
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:20:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:20:21 +0000   Sat, 22 Nov 2025 00:19:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:20:21 +0000   Sat, 22 Nov 2025 00:19:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:20:21 +0000   Sat, 22 Nov 2025 00:19:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:20:21 +0000   Sat, 22 Nov 2025 00:20:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-781232
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 5665009e93b91d39dc05718b691e3875
	  System UUID:                34f9a952-9825-419d-98a4-5c9d048a8949
	  Boot ID:                    725aae03-f893-4e0b-b029-cbd3b00ccfdd
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-9wcct                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-no-preload-781232                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-llcnc                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-no-preload-781232             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-no-preload-781232    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-685jg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-no-preload-781232             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 33s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  33s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  33s   kubelet          Node no-preload-781232 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s   kubelet          Node no-preload-781232 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s   kubelet          Node no-preload-781232 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node no-preload-781232 event: Registered Node no-preload-781232 in Controller
	  Normal  NodeReady                13s   kubelet          Node no-preload-781232 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov21 23:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000865] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.410276] i8042: Warning: Keylock active
	[  +0.014947] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.495836] block sda: the capability attribute has been deprecated.
	[  +0.091740] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024333] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.452540] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [190bb0852270abcf17fda286c6be5e9fcb36eb2b98dcf07cf71fa2985c5db26b] <==
	{"level":"warn","ts":"2025-11-22T00:19:48.020496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.029344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.036578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.044633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.052341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.059700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.067305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.075178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.083428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.091252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.098111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.105126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.115320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.122949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.130869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.138077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.153643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.158220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.165369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.172568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.187849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.195427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.203497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.251107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:06.065444Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.807983ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766331608303053 > lease_revoke:<id:5b339aa8ee6283fb>","response":"size:29"}
	
	
	==> kernel <==
	 00:20:24 up  1:02,  0 user,  load average: 5.71, 3.68, 2.28
	Linux no-preload-781232 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [304e6535bf7bedf2a516b8d232b19d3e038abaca4c8c450355eade98b387f580] <==
	I1122 00:20:01.282827       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:20:01.283115       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1122 00:20:01.285363       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:20:01.285391       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:20:01.285415       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:20:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:20:01.579338       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:20:01.579365       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:20:01.579401       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:20:01.579930       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:20:02.079584       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:20:02.079629       1 metrics.go:72] Registering metrics
	I1122 00:20:02.079738       1 controller.go:711] "Syncing nftables rules"
	I1122 00:20:11.580110       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1122 00:20:11.580197       1 main.go:301] handling current node
	I1122 00:20:21.579658       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1122 00:20:21.579692       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7f1227117afb11933863eec6c929a38cd5f7c89c181f267ac92151e7d68ac0bb] <==
	E1122 00:19:48.903533       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1122 00:19:48.948724       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:19:48.966774       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:19:48.966780       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1122 00:19:48.973236       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:19:48.974951       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:19:49.082403       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:19:49.751231       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1122 00:19:49.755074       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1122 00:19:49.755097       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:19:50.385719       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:19:50.430829       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:19:50.558323       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1122 00:19:50.566618       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1122 00:19:50.567866       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:19:50.572538       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:19:50.989615       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:19:51.569130       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:19:51.579846       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1122 00:19:51.587753       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:19:56.392066       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:19:56.397169       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:19:57.040752       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:19:57.089341       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1122 00:20:23.692415       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:44986: use of closed network connection
	
	
	==> kube-controller-manager [13c5477f80d07937f3038c381810143f379c1a5724ad58b9f212e7d95e199ef6] <==
	I1122 00:19:55.943773       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:19:55.948143       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-781232" podCIDRs=["10.244.0.0/24"]
	I1122 00:19:55.951028       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1122 00:19:55.952216       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1122 00:19:55.958685       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1122 00:19:55.967211       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1122 00:19:55.969502       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1122 00:19:55.985506       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 00:19:55.987778       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1122 00:19:55.987818       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1122 00:19:55.987843       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:19:55.987860       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1122 00:19:55.987891       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:19:55.987892       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1122 00:19:55.988056       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1122 00:19:55.988157       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:19:55.988196       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1122 00:19:55.988451       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1122 00:19:55.988570       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1122 00:19:55.993493       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1122 00:19:55.995762       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:19:55.999041       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1122 00:19:56.003456       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1122 00:19:56.009732       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:20:15.940020       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [2b0f0e4e1df6d003c1fd5d63a2d88caf527a5828be1e719b714f70bf70e013e6] <==
	I1122 00:19:57.745181       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:19:57.820374       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:19:57.920741       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:19:57.920805       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1122 00:19:57.920908       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:19:57.944005       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:19:57.944068       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:19:57.949691       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:19:57.950216       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:19:57.950247       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:19:57.951713       1 config.go:200] "Starting service config controller"
	I1122 00:19:57.951744       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:19:57.952068       1 config.go:309] "Starting node config controller"
	I1122 00:19:57.952079       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:19:57.952087       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:19:57.952127       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:19:57.952133       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:19:57.952152       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:19:57.952157       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:19:58.052730       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:19:58.052758       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1122 00:19:58.052792       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [6b02e9e9a07928c42cf1e5bb58d45de4ce420454640d91b3f098f98aa2f59ca6] <==
	E1122 00:19:49.252868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1122 00:19:49.252981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:19:49.253033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:19:49.253095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1122 00:19:49.253096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:19:49.253177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:19:49.253195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1122 00:19:49.253304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:19:49.253822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:19:49.254123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:19:49.254316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:19:49.254409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:19:49.254603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:19:49.255138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:19:49.255326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:19:49.255451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:19:49.255463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:19:49.255487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:19:49.255552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 00:19:50.077997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:19:50.105397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:19:50.128752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1122 00:19:50.191530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:19:50.320610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1122 00:19:52.548275       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:19:52 no-preload-781232 kubelet[2185]: I1122 00:19:52.497026    2185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-781232" podStartSLOduration=1.4970037889999999 podStartE2EDuration="1.497003789s" podCreationTimestamp="2025-11-22 00:19:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:19:52.485725647 +0000 UTC m=+1.155699844" watchObservedRunningTime="2025-11-22 00:19:52.497003789 +0000 UTC m=+1.166977980"
	Nov 22 00:19:52 no-preload-781232 kubelet[2185]: I1122 00:19:52.507726    2185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-781232" podStartSLOduration=1.5077082979999998 podStartE2EDuration="1.507708298s" podCreationTimestamp="2025-11-22 00:19:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:19:52.497358614 +0000 UTC m=+1.167332813" watchObservedRunningTime="2025-11-22 00:19:52.507708298 +0000 UTC m=+1.177682496"
	Nov 22 00:19:52 no-preload-781232 kubelet[2185]: I1122 00:19:52.524221    2185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-781232" podStartSLOduration=1.524201804 podStartE2EDuration="1.524201804s" podCreationTimestamp="2025-11-22 00:19:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:19:52.523919549 +0000 UTC m=+1.193893746" watchObservedRunningTime="2025-11-22 00:19:52.524201804 +0000 UTC m=+1.194176001"
	Nov 22 00:19:52 no-preload-781232 kubelet[2185]: I1122 00:19:52.524428    2185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-781232" podStartSLOduration=2.5244149670000002 podStartE2EDuration="2.524414967s" podCreationTimestamp="2025-11-22 00:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:19:52.508124285 +0000 UTC m=+1.178098482" watchObservedRunningTime="2025-11-22 00:19:52.524414967 +0000 UTC m=+1.194389144"
	Nov 22 00:19:55 no-preload-781232 kubelet[2185]: I1122 00:19:55.977925    2185 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 22 00:19:55 no-preload-781232 kubelet[2185]: I1122 00:19:55.978713    2185 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 22 00:19:57 no-preload-781232 kubelet[2185]: I1122 00:19:57.141500    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33a2d2c1-e364-4ec8-a9a0-69ba9146625f-xtables-lock\") pod \"kube-proxy-685jg\" (UID: \"33a2d2c1-e364-4ec8-a9a0-69ba9146625f\") " pod="kube-system/kube-proxy-685jg"
	Nov 22 00:19:57 no-preload-781232 kubelet[2185]: I1122 00:19:57.141537    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33a2d2c1-e364-4ec8-a9a0-69ba9146625f-lib-modules\") pod \"kube-proxy-685jg\" (UID: \"33a2d2c1-e364-4ec8-a9a0-69ba9146625f\") " pod="kube-system/kube-proxy-685jg"
	Nov 22 00:19:57 no-preload-781232 kubelet[2185]: I1122 00:19:57.141556    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw28b\" (UniqueName: \"kubernetes.io/projected/33a2d2c1-e364-4ec8-a9a0-69ba9146625f-kube-api-access-zw28b\") pod \"kube-proxy-685jg\" (UID: \"33a2d2c1-e364-4ec8-a9a0-69ba9146625f\") " pod="kube-system/kube-proxy-685jg"
	Nov 22 00:19:57 no-preload-781232 kubelet[2185]: I1122 00:19:57.141576    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fcdd9f25-4804-47c2-8f09-b6a2d688a8bc-xtables-lock\") pod \"kindnet-llcnc\" (UID: \"fcdd9f25-4804-47c2-8f09-b6a2d688a8bc\") " pod="kube-system/kindnet-llcnc"
	Nov 22 00:19:57 no-preload-781232 kubelet[2185]: I1122 00:19:57.141635    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fcdd9f25-4804-47c2-8f09-b6a2d688a8bc-lib-modules\") pod \"kindnet-llcnc\" (UID: \"fcdd9f25-4804-47c2-8f09-b6a2d688a8bc\") " pod="kube-system/kindnet-llcnc"
	Nov 22 00:19:57 no-preload-781232 kubelet[2185]: I1122 00:19:57.141684    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgjjc\" (UniqueName: \"kubernetes.io/projected/fcdd9f25-4804-47c2-8f09-b6a2d688a8bc-kube-api-access-tgjjc\") pod \"kindnet-llcnc\" (UID: \"fcdd9f25-4804-47c2-8f09-b6a2d688a8bc\") " pod="kube-system/kindnet-llcnc"
	Nov 22 00:19:57 no-preload-781232 kubelet[2185]: I1122 00:19:57.141740    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/fcdd9f25-4804-47c2-8f09-b6a2d688a8bc-cni-cfg\") pod \"kindnet-llcnc\" (UID: \"fcdd9f25-4804-47c2-8f09-b6a2d688a8bc\") " pod="kube-system/kindnet-llcnc"
	Nov 22 00:19:57 no-preload-781232 kubelet[2185]: I1122 00:19:57.141773    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/33a2d2c1-e364-4ec8-a9a0-69ba9146625f-kube-proxy\") pod \"kube-proxy-685jg\" (UID: \"33a2d2c1-e364-4ec8-a9a0-69ba9146625f\") " pod="kube-system/kube-proxy-685jg"
	Nov 22 00:19:58 no-preload-781232 kubelet[2185]: I1122 00:19:58.475239    2185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-685jg" podStartSLOduration=1.475209261 podStartE2EDuration="1.475209261s" podCreationTimestamp="2025-11-22 00:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:19:58.474998555 +0000 UTC m=+7.144972752" watchObservedRunningTime="2025-11-22 00:19:58.475209261 +0000 UTC m=+7.145183457"
	Nov 22 00:20:01 no-preload-781232 kubelet[2185]: I1122 00:20:01.484255    2185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-llcnc" podStartSLOduration=1.318787825 podStartE2EDuration="4.484237697s" podCreationTimestamp="2025-11-22 00:19:57 +0000 UTC" firstStartedPulling="2025-11-22 00:19:57.782479973 +0000 UTC m=+6.452454162" lastFinishedPulling="2025-11-22 00:20:00.947929854 +0000 UTC m=+9.617904034" observedRunningTime="2025-11-22 00:20:01.484049069 +0000 UTC m=+10.154023264" watchObservedRunningTime="2025-11-22 00:20:01.484237697 +0000 UTC m=+10.154211885"
	Nov 22 00:20:11 no-preload-781232 kubelet[2185]: I1122 00:20:11.652649    2185 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 22 00:20:11 no-preload-781232 kubelet[2185]: I1122 00:20:11.739640    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/904bdf70-7728-45c5-a9ae-487aed28e6fc-tmp\") pod \"storage-provisioner\" (UID: \"904bdf70-7728-45c5-a9ae-487aed28e6fc\") " pod="kube-system/storage-provisioner"
	Nov 22 00:20:11 no-preload-781232 kubelet[2185]: I1122 00:20:11.739695    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjxz7\" (UniqueName: \"kubernetes.io/projected/904bdf70-7728-45c5-a9ae-487aed28e6fc-kube-api-access-xjxz7\") pod \"storage-provisioner\" (UID: \"904bdf70-7728-45c5-a9ae-487aed28e6fc\") " pod="kube-system/storage-provisioner"
	Nov 22 00:20:11 no-preload-781232 kubelet[2185]: I1122 00:20:11.739725    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67b97cc5-016b-44d1-8119-dd6aa4932f83-config-volume\") pod \"coredns-66bc5c9577-9wcct\" (UID: \"67b97cc5-016b-44d1-8119-dd6aa4932f83\") " pod="kube-system/coredns-66bc5c9577-9wcct"
	Nov 22 00:20:11 no-preload-781232 kubelet[2185]: I1122 00:20:11.739751    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkgkd\" (UniqueName: \"kubernetes.io/projected/67b97cc5-016b-44d1-8119-dd6aa4932f83-kube-api-access-tkgkd\") pod \"coredns-66bc5c9577-9wcct\" (UID: \"67b97cc5-016b-44d1-8119-dd6aa4932f83\") " pod="kube-system/coredns-66bc5c9577-9wcct"
	Nov 22 00:20:12 no-preload-781232 kubelet[2185]: I1122 00:20:12.528668    2185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9wcct" podStartSLOduration=15.528640775 podStartE2EDuration="15.528640775s" podCreationTimestamp="2025-11-22 00:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:20:12.513567573 +0000 UTC m=+21.183541794" watchObservedRunningTime="2025-11-22 00:20:12.528640775 +0000 UTC m=+21.198614973"
	Nov 22 00:20:14 no-preload-781232 kubelet[2185]: I1122 00:20:14.582118    2185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=17.582090447 podStartE2EDuration="17.582090447s" podCreationTimestamp="2025-11-22 00:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:20:12.544916262 +0000 UTC m=+21.214890459" watchObservedRunningTime="2025-11-22 00:20:14.582090447 +0000 UTC m=+23.252064644"
	Nov 22 00:20:14 no-preload-781232 kubelet[2185]: I1122 00:20:14.657660    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcdq4\" (UniqueName: \"kubernetes.io/projected/c9470f46-fa0e-479c-82bc-857ad36201bf-kube-api-access-tcdq4\") pod \"busybox\" (UID: \"c9470f46-fa0e-479c-82bc-857ad36201bf\") " pod="default/busybox"
	Nov 22 00:20:17 no-preload-781232 kubelet[2185]: I1122 00:20:17.529121    2185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.3241123 podStartE2EDuration="3.529088108s" podCreationTimestamp="2025-11-22 00:20:14 +0000 UTC" firstStartedPulling="2025-11-22 00:20:15.020606103 +0000 UTC m=+23.690580283" lastFinishedPulling="2025-11-22 00:20:17.225581913 +0000 UTC m=+25.895556091" observedRunningTime="2025-11-22 00:20:17.528843748 +0000 UTC m=+26.198817946" watchObservedRunningTime="2025-11-22 00:20:17.529088108 +0000 UTC m=+26.199062305"
	
	
	==> storage-provisioner [a8df28ee53bb60379874726c9a896717f75e12fd13a7316e60ad11da58feca4a] <==
	I1122 00:20:12.203980       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:20:12.215498       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:20:12.215555       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1122 00:20:12.218724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:12.226119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:20:12.226503       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:20:12.226810       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"663f7033-f1d0-4a6a-a7b5-6ae68ff1b408", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-781232_34fa41f9-0564-4cf9-a793-d3e8600ab02c became leader
	I1122 00:20:12.226865       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-781232_34fa41f9-0564-4cf9-a793-d3e8600ab02c!
	W1122 00:20:12.232654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:12.239832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:20:12.327083       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-781232_34fa41f9-0564-4cf9-a793-d3e8600ab02c!
	W1122 00:20:14.243309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:14.248432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:16.251969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:16.256425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:18.260452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:18.266363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:20.270417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:20.274881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:22.278341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:22.283874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:24.291521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:24.300020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-781232 -n no-preload-781232
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-781232 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-781232
helpers_test.go:243: (dbg) docker inspect no-preload-781232:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e6866ff20d68f8b2ae22665293c4e6a886fd27072201fb8e2c70d38fec0d6801",
	        "Created": "2025-11-22T00:19:23.714697998Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 251859,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:19:23.763938006Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e5906b22e872a17998ae88aee6d850484e7a99144e0db6afcf2c44a53e6042d4",
	        "ResolvConfPath": "/var/lib/docker/containers/e6866ff20d68f8b2ae22665293c4e6a886fd27072201fb8e2c70d38fec0d6801/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e6866ff20d68f8b2ae22665293c4e6a886fd27072201fb8e2c70d38fec0d6801/hostname",
	        "HostsPath": "/var/lib/docker/containers/e6866ff20d68f8b2ae22665293c4e6a886fd27072201fb8e2c70d38fec0d6801/hosts",
	        "LogPath": "/var/lib/docker/containers/e6866ff20d68f8b2ae22665293c4e6a886fd27072201fb8e2c70d38fec0d6801/e6866ff20d68f8b2ae22665293c4e6a886fd27072201fb8e2c70d38fec0d6801-json.log",
	        "Name": "/no-preload-781232",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-781232:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-781232",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e6866ff20d68f8b2ae22665293c4e6a886fd27072201fb8e2c70d38fec0d6801",
	                "LowerDir": "/var/lib/docker/overlay2/ffb802e3234af36569acaf9598f08ce82b2457943278e51bdea70ae4987b4b7e-init/diff:/var/lib/docker/overlay2/4b4af9a4e857911a6b5096aeeaee227ee7577c6eff3b08bbb4e765c49ed2fb70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ffb802e3234af36569acaf9598f08ce82b2457943278e51bdea70ae4987b4b7e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ffb802e3234af36569acaf9598f08ce82b2457943278e51bdea70ae4987b4b7e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ffb802e3234af36569acaf9598f08ce82b2457943278e51bdea70ae4987b4b7e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-781232",
	                "Source": "/var/lib/docker/volumes/no-preload-781232/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-781232",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-781232",
	                "name.minikube.sigs.k8s.io": "no-preload-781232",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6cc2b0abcc4b89d4773e5b4ef90cec1849441e89ee9f2f96b3f073bacf5664b0",
	            "SandboxKey": "/var/run/docker/netns/6cc2b0abcc4b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-781232": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cca4389e0847032e4c18e18b7945e1c2646a84dee2b87d0f44df9d94c78a3170",
	                    "EndpointID": "5eb2eb6bb07c716470bc95040f0f020393f43f81701c3f040328b16c8328525a",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "f6:28:1a:7f:f0:68",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-781232",
	                        "e6866ff20d68"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-781232 -n no-preload-781232
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-781232 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-781232 logs -n 25: (1.038002012s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-687868 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo containerd config dump                                                                                                                                                                                                        │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo crio config                                                                                                                                                                                                                   │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ delete  │ -p cilium-687868                                                                                                                                                                                                                                    │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ start   │ -p old-k8s-version-462319 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-462319 │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:20 UTC │
	│ ssh     │ -p NoKubernetes-714059 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ start   │ -p cert-expiration-427330 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-427330 │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ delete  │ -p cert-expiration-427330                                                                                                                                                                                                                           │ cert-expiration-427330 │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ start   │ -p no-preload-781232 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-781232      │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:20 UTC │
	│ stop    │ -p NoKubernetes-714059                                                                                                                                                                                                                              │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ start   │ -p NoKubernetes-714059 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ ssh     │ -p NoKubernetes-714059 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ delete  │ -p NoKubernetes-714059                                                                                                                                                                                                                              │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:20 UTC │
	│ start   │ -p embed-certs-491677 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-491677     │ jenkins │ v1.37.0 │ 22 Nov 25 00:20 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-462319 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-462319 │ jenkins │ v1.37.0 │ 22 Nov 25 00:20 UTC │ 22 Nov 25 00:20 UTC │
	│ stop    │ -p old-k8s-version-462319 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-462319 │ jenkins │ v1.37.0 │ 22 Nov 25 00:20 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:20:01
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:20:01.497017  260527 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:20:01.497324  260527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:20:01.497336  260527 out.go:374] Setting ErrFile to fd 2...
	I1122 00:20:01.497340  260527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:20:01.497588  260527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9059/.minikube/bin
	I1122 00:20:01.498054  260527 out.go:368] Setting JSON to false
	I1122 00:20:01.499443  260527 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3740,"bootTime":1763767061,"procs":385,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:20:01.499503  260527 start.go:143] virtualization: kvm guest
	I1122 00:20:01.501458  260527 out.go:179] * [embed-certs-491677] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:20:01.503562  260527 notify.go:221] Checking for updates...
	I1122 00:20:01.503572  260527 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:20:01.505088  260527 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:20:01.506758  260527 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9059/kubeconfig
	I1122 00:20:01.508287  260527 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9059/.minikube
	I1122 00:20:01.509699  260527 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:20:01.511183  260527 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:20:01.513382  260527 config.go:182] Loaded profile config "kubernetes-upgrade-882262": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:20:01.513541  260527 config.go:182] Loaded profile config "no-preload-781232": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:20:01.513638  260527 config.go:182] Loaded profile config "old-k8s-version-462319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1122 00:20:01.513752  260527 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:20:01.545401  260527 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1122 00:20:01.545504  260527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:20:01.611105  260527 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-22 00:20:01.601298329 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:20:01.611234  260527 docker.go:319] overlay module found
	I1122 00:20:01.613226  260527 out.go:179] * Using the docker driver based on user configuration
	I1122 00:20:01.614649  260527 start.go:309] selected driver: docker
	I1122 00:20:01.614666  260527 start.go:930] validating driver "docker" against <nil>
	I1122 00:20:01.614677  260527 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:20:01.615350  260527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:20:01.674666  260527 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-22 00:20:01.664354692 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:20:01.674876  260527 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1122 00:20:01.675176  260527 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:20:01.676975  260527 out.go:179] * Using Docker driver with root privileges
	I1122 00:20:01.678251  260527 cni.go:84] Creating CNI manager for ""
	I1122 00:20:01.678367  260527 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:20:01.678383  260527 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1122 00:20:01.678481  260527 start.go:353] cluster config:
	{Name:embed-certs-491677 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-491677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:20:01.680036  260527 out.go:179] * Starting "embed-certs-491677" primary control-plane node in "embed-certs-491677" cluster
	I1122 00:20:01.683810  260527 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1122 00:20:01.685242  260527 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:20:01.686680  260527 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:20:01.686729  260527 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1122 00:20:01.686743  260527 cache.go:65] Caching tarball of preloaded images
	I1122 00:20:01.686775  260527 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:20:01.686916  260527 preload.go:238] Found /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1122 00:20:01.686942  260527 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1122 00:20:01.687116  260527 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/config.json ...
	I1122 00:20:01.687148  260527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/config.json: {Name:mkf02d672882aad1c3b94e79745f8cf62e3f5b13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:20:01.708872  260527 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:20:01.708897  260527 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:20:01.708914  260527 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:20:01.708943  260527 start.go:360] acquireMachinesLock for embed-certs-491677: {Name:mkbe59d49caffedca862a9ecb177d8d82196efdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:20:01.709044  260527 start.go:364] duration metric: took 84.98µs to acquireMachinesLock for "embed-certs-491677"
	I1122 00:20:01.709067  260527 start.go:93] Provisioning new machine with config: &{Name:embed-certs-491677 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-491677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1122 00:20:01.709131  260527 start.go:125] createHost starting for "" (driver="docker")
	I1122 00:19:58.829298  218693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:19:58.829759  218693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:19:58.829815  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:19:58.829864  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:19:58.856999  218693 cri.go:89] found id: "031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:19:58.857027  218693 cri.go:89] found id: "2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:19:58.857033  218693 cri.go:89] found id: ""
	I1122 00:19:58.857044  218693 logs.go:282] 2 containers: [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587]
	I1122 00:19:58.857093  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:19:58.861107  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:19:58.865268  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1122 00:19:58.865337  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:19:58.892542  218693 cri.go:89] found id: "ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:19:58.892564  218693 cri.go:89] found id: ""
	I1122 00:19:58.892572  218693 logs.go:282] 1 containers: [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7]
	I1122 00:19:58.892626  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:19:58.896771  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1122 00:19:58.896846  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:19:58.925628  218693 cri.go:89] found id: ""
	I1122 00:19:58.925652  218693 logs.go:282] 0 containers: []
	W1122 00:19:58.925660  218693 logs.go:284] No container was found matching "coredns"
	I1122 00:19:58.925666  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:19:58.925724  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:19:58.955304  218693 cri.go:89] found id: "8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:19:58.955326  218693 cri.go:89] found id: "b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:19:58.955332  218693 cri.go:89] found id: ""
	I1122 00:19:58.955340  218693 logs.go:282] 2 containers: [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2]
	I1122 00:19:58.955397  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:19:58.959396  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:19:58.963562  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:19:58.963626  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:19:58.991860  218693 cri.go:89] found id: ""
	I1122 00:19:58.991883  218693 logs.go:282] 0 containers: []
	W1122 00:19:58.991890  218693 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:19:58.991895  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:19:58.991949  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:19:59.020457  218693 cri.go:89] found id: "91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:19:59.020483  218693 cri.go:89] found id: "13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:19:59.020489  218693 cri.go:89] found id: ""
	I1122 00:19:59.020502  218693 logs.go:282] 2 containers: [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216]
	I1122 00:19:59.020550  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:19:59.024967  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:19:59.031778  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1122 00:19:59.031854  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:19:59.061726  218693 cri.go:89] found id: ""
	I1122 00:19:59.061752  218693 logs.go:282] 0 containers: []
	W1122 00:19:59.061763  218693 logs.go:284] No container was found matching "kindnet"
	I1122 00:19:59.061771  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:19:59.061831  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:19:59.089141  218693 cri.go:89] found id: ""
	I1122 00:19:59.089164  218693 logs.go:282] 0 containers: []
	W1122 00:19:59.089174  218693 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:19:59.089185  218693 logs.go:123] Gathering logs for kubelet ...
	I1122 00:19:59.089198  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:19:59.186417  218693 logs.go:123] Gathering logs for dmesg ...
	I1122 00:19:59.186452  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:19:59.201060  218693 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:19:59.201095  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:19:59.264254  218693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:19:59.264297  218693 logs.go:123] Gathering logs for kube-apiserver [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d] ...
	I1122 00:19:59.264313  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:19:59.303605  218693 logs.go:123] Gathering logs for kube-scheduler [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78] ...
	I1122 00:19:59.303643  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:19:59.358382  218693 logs.go:123] Gathering logs for kube-controller-manager [13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216] ...
	I1122 00:19:59.358425  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:19:59.398629  218693 logs.go:123] Gathering logs for containerd ...
	I1122 00:19:59.398669  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1122 00:19:59.449463  218693 logs.go:123] Gathering logs for kube-apiserver [2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587] ...
	I1122 00:19:59.449505  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:19:59.487365  218693 logs.go:123] Gathering logs for etcd [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7] ...
	I1122 00:19:59.487403  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:19:59.526046  218693 logs.go:123] Gathering logs for kube-scheduler [b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2] ...
	I1122 00:19:59.526080  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:19:59.562812  218693 logs.go:123] Gathering logs for kube-controller-manager [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a] ...
	I1122 00:19:59.562843  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:19:59.594191  218693 logs.go:123] Gathering logs for container status ...
	I1122 00:19:59.594230  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:20:02.129372  218693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:20:02.129923  218693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:20:02.130004  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:20:02.130071  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:20:02.161455  218693 cri.go:89] found id: "031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:02.161484  218693 cri.go:89] found id: "2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:02.161490  218693 cri.go:89] found id: ""
	I1122 00:20:02.161501  218693 logs.go:282] 2 containers: [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587]
	I1122 00:20:02.161563  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:02.165824  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:02.170451  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1122 00:20:02.170522  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:19:58.029853  251199 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-781232" context rescaled to 1 replicas
	W1122 00:19:59.529847  251199 node_ready.go:57] node "no-preload-781232" has "Ready":"False" status (will retry)
	W1122 00:20:01.530493  251199 node_ready.go:57] node "no-preload-781232" has "Ready":"False" status (will retry)
	W1122 00:20:00.520224  247021 node_ready.go:57] node "old-k8s-version-462319" has "Ready":"False" status (will retry)
	I1122 00:20:01.019651  247021 node_ready.go:49] node "old-k8s-version-462319" is "Ready"
	I1122 00:20:01.019681  247021 node_ready.go:38] duration metric: took 14.003330086s for node "old-k8s-version-462319" to be "Ready" ...
	I1122 00:20:01.019696  247021 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:20:01.019743  247021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:20:01.032926  247021 api_server.go:72] duration metric: took 14.481952557s to wait for apiserver process to appear ...
	I1122 00:20:01.032954  247021 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:20:01.032973  247021 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:20:01.039899  247021 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1122 00:20:01.041146  247021 api_server.go:141] control plane version: v1.28.0
	I1122 00:20:01.041172  247021 api_server.go:131] duration metric: took 8.212119ms to wait for apiserver health ...
	I1122 00:20:01.041191  247021 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:20:01.044815  247021 system_pods.go:59] 8 kube-system pods found
	I1122 00:20:01.044853  247021 system_pods.go:61] "coredns-5dd5756b68-pqbfp" [44750e8d-5eeb-4845-9029-a58cbf976b62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:01.044862  247021 system_pods.go:61] "etcd-old-k8s-version-462319" [9580468b-aa0f-4d73-9c35-f9cc4c817cdd] Running
	I1122 00:20:01.044874  247021 system_pods.go:61] "kindnet-ldtd8" [6bf161d2-c442-466d-98b8-c313a127bf22] Running
	I1122 00:20:01.044879  247021 system_pods.go:61] "kube-apiserver-old-k8s-version-462319" [2f4b6fd0-2929-448d-820c-aabf2a9d4744] Running
	I1122 00:20:01.044888  247021 system_pods.go:61] "kube-controller-manager-old-k8s-version-462319" [83b4a291-8bac-4581-b4a6-80471e7228eb] Running
	I1122 00:20:01.044897  247021 system_pods.go:61] "kube-proxy-kqrng" [643cd348-4af3-4720-af0d-e931f184742c] Running
	I1122 00:20:01.044901  247021 system_pods.go:61] "kube-scheduler-old-k8s-version-462319" [c1dc982d-cc79-4df6-bdc4-7e47f5d5236c] Running
	I1122 00:20:01.044909  247021 system_pods.go:61] "storage-provisioner" [fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:01.044918  247021 system_pods.go:74] duration metric: took 3.718269ms to wait for pod list to return data ...
	I1122 00:20:01.044929  247021 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:20:01.047150  247021 default_sa.go:45] found service account: "default"
	I1122 00:20:01.047173  247021 default_sa.go:55] duration metric: took 2.236156ms for default service account to be created ...
	I1122 00:20:01.047182  247021 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:20:01.050474  247021 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:01.050506  247021 system_pods.go:89] "coredns-5dd5756b68-pqbfp" [44750e8d-5eeb-4845-9029-a58cbf976b62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:01.050514  247021 system_pods.go:89] "etcd-old-k8s-version-462319" [9580468b-aa0f-4d73-9c35-f9cc4c817cdd] Running
	I1122 00:20:01.050523  247021 system_pods.go:89] "kindnet-ldtd8" [6bf161d2-c442-466d-98b8-c313a127bf22] Running
	I1122 00:20:01.050528  247021 system_pods.go:89] "kube-apiserver-old-k8s-version-462319" [2f4b6fd0-2929-448d-820c-aabf2a9d4744] Running
	I1122 00:20:01.050533  247021 system_pods.go:89] "kube-controller-manager-old-k8s-version-462319" [83b4a291-8bac-4581-b4a6-80471e7228eb] Running
	I1122 00:20:01.050539  247021 system_pods.go:89] "kube-proxy-kqrng" [643cd348-4af3-4720-af0d-e931f184742c] Running
	I1122 00:20:01.050544  247021 system_pods.go:89] "kube-scheduler-old-k8s-version-462319" [c1dc982d-cc79-4df6-bdc4-7e47f5d5236c] Running
	I1122 00:20:01.050551  247021 system_pods.go:89] "storage-provisioner" [fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:01.050577  247021 retry.go:31] will retry after 205.575764ms: missing components: kube-dns
	I1122 00:20:01.261814  247021 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:01.261847  247021 system_pods.go:89] "coredns-5dd5756b68-pqbfp" [44750e8d-5eeb-4845-9029-a58cbf976b62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:01.261859  247021 system_pods.go:89] "etcd-old-k8s-version-462319" [9580468b-aa0f-4d73-9c35-f9cc4c817cdd] Running
	I1122 00:20:01.261865  247021 system_pods.go:89] "kindnet-ldtd8" [6bf161d2-c442-466d-98b8-c313a127bf22] Running
	I1122 00:20:01.261869  247021 system_pods.go:89] "kube-apiserver-old-k8s-version-462319" [2f4b6fd0-2929-448d-820c-aabf2a9d4744] Running
	I1122 00:20:01.261873  247021 system_pods.go:89] "kube-controller-manager-old-k8s-version-462319" [83b4a291-8bac-4581-b4a6-80471e7228eb] Running
	I1122 00:20:01.261877  247021 system_pods.go:89] "kube-proxy-kqrng" [643cd348-4af3-4720-af0d-e931f184742c] Running
	I1122 00:20:01.261879  247021 system_pods.go:89] "kube-scheduler-old-k8s-version-462319" [c1dc982d-cc79-4df6-bdc4-7e47f5d5236c] Running
	I1122 00:20:01.261884  247021 system_pods.go:89] "storage-provisioner" [fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:01.261900  247021 retry.go:31] will retry after 236.21482ms: missing components: kube-dns
	I1122 00:20:01.502877  247021 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:01.502913  247021 system_pods.go:89] "coredns-5dd5756b68-pqbfp" [44750e8d-5eeb-4845-9029-a58cbf976b62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:01.502921  247021 system_pods.go:89] "etcd-old-k8s-version-462319" [9580468b-aa0f-4d73-9c35-f9cc4c817cdd] Running
	I1122 00:20:01.502929  247021 system_pods.go:89] "kindnet-ldtd8" [6bf161d2-c442-466d-98b8-c313a127bf22] Running
	I1122 00:20:01.502935  247021 system_pods.go:89] "kube-apiserver-old-k8s-version-462319" [2f4b6fd0-2929-448d-820c-aabf2a9d4744] Running
	I1122 00:20:01.502952  247021 system_pods.go:89] "kube-controller-manager-old-k8s-version-462319" [83b4a291-8bac-4581-b4a6-80471e7228eb] Running
	I1122 00:20:01.502957  247021 system_pods.go:89] "kube-proxy-kqrng" [643cd348-4af3-4720-af0d-e931f184742c] Running
	I1122 00:20:01.502962  247021 system_pods.go:89] "kube-scheduler-old-k8s-version-462319" [c1dc982d-cc79-4df6-bdc4-7e47f5d5236c] Running
	I1122 00:20:01.502984  247021 system_pods.go:89] "storage-provisioner" [fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:01.503005  247021 retry.go:31] will retry after 442.873739ms: missing components: kube-dns
	I1122 00:20:01.950449  247021 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:01.950483  247021 system_pods.go:89] "coredns-5dd5756b68-pqbfp" [44750e8d-5eeb-4845-9029-a58cbf976b62] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:01.950492  247021 system_pods.go:89] "etcd-old-k8s-version-462319" [9580468b-aa0f-4d73-9c35-f9cc4c817cdd] Running
	I1122 00:20:01.950500  247021 system_pods.go:89] "kindnet-ldtd8" [6bf161d2-c442-466d-98b8-c313a127bf22] Running
	I1122 00:20:01.950505  247021 system_pods.go:89] "kube-apiserver-old-k8s-version-462319" [2f4b6fd0-2929-448d-820c-aabf2a9d4744] Running
	I1122 00:20:01.950516  247021 system_pods.go:89] "kube-controller-manager-old-k8s-version-462319" [83b4a291-8bac-4581-b4a6-80471e7228eb] Running
	I1122 00:20:01.950521  247021 system_pods.go:89] "kube-proxy-kqrng" [643cd348-4af3-4720-af0d-e931f184742c] Running
	I1122 00:20:01.950526  247021 system_pods.go:89] "kube-scheduler-old-k8s-version-462319" [c1dc982d-cc79-4df6-bdc4-7e47f5d5236c] Running
	I1122 00:20:01.950530  247021 system_pods.go:89] "storage-provisioner" [fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2] Running
	I1122 00:20:01.950541  247021 system_pods.go:126] duration metric: took 903.352039ms to wait for k8s-apps to be running ...
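
    The retry.go:31 lines above show the generic backoff used while waiting for kube-dns to report Running: re-check, sleep a jittered and growing delay, repeat until the deadline. A minimal sketch of that pattern, assuming an illustrative helper name (waitForComponents) rather than minikube's actual API:

        // Hedged sketch of the retry-with-backoff pattern logged by retry.go above.
        // waitForComponents and checkFn are illustrative names, not minikube's.
        package main

        import (
            "errors"
            "fmt"
            "math/rand"
            "time"
        )

        func waitForComponents(timeout time.Duration, check func() error) error {
            deadline := time.Now().Add(timeout)
            delay := 200 * time.Millisecond
            for {
                err := check()
                if err == nil {
                    return nil
                }
                if time.Now().After(deadline) {
                    return fmt.Errorf("timed out waiting: %w", err)
                }
                // jitter and grow the delay, roughly matching the
                // 205ms -> 236ms -> 442ms progression in the log
                time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
                delay *= 2
            }
        }

        func main() {
            attempts := 0
            err := waitForComponents(5*time.Second, func() error {
                attempts++
                if attempts < 4 {
                    return errors.New("missing components: kube-dns")
                }
                return nil
            })
            fmt.Println(err)
        }
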
	I1122 00:20:01.950553  247021 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:20:01.950602  247021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:20:01.964580  247021 system_svc.go:56] duration metric: took 14.015441ms WaitForService to wait for kubelet
	I1122 00:20:01.964612  247021 kubeadm.go:587] duration metric: took 15.413644993s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:20:01.964634  247021 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:20:01.968157  247021 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1122 00:20:01.968185  247021 node_conditions.go:123] node cpu capacity is 8
	I1122 00:20:01.968205  247021 node_conditions.go:105] duration metric: took 3.565831ms to run NodePressure ...
	I1122 00:20:01.968227  247021 start.go:242] waiting for startup goroutines ...
	I1122 00:20:01.968237  247021 start.go:247] waiting for cluster config update ...
	I1122 00:20:01.968254  247021 start.go:256] writing updated cluster config ...
	I1122 00:20:01.968545  247021 ssh_runner.go:195] Run: rm -f paused
	I1122 00:20:01.972712  247021 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:20:01.976920  247021 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-pqbfp" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:02.983354  247021 pod_ready.go:94] pod "coredns-5dd5756b68-pqbfp" is "Ready"
	I1122 00:20:02.983385  247021 pod_ready.go:86] duration metric: took 1.00643947s for pod "coredns-5dd5756b68-pqbfp" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:02.987209  247021 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:02.992024  247021 pod_ready.go:94] pod "etcd-old-k8s-version-462319" is "Ready"
	I1122 00:20:02.992053  247021 pod_ready.go:86] duration metric: took 4.821819ms for pod "etcd-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:02.994875  247021 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:02.998765  247021 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-462319" is "Ready"
	I1122 00:20:02.998789  247021 pod_ready.go:86] duration metric: took 3.892836ms for pod "kube-apiserver-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:03.001798  247021 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:03.181579  247021 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-462319" is "Ready"
	I1122 00:20:03.181611  247021 pod_ready.go:86] duration metric: took 179.791243ms for pod "kube-controller-manager-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:03.381883  247021 pod_ready.go:83] waiting for pod "kube-proxy-kqrng" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:03.781562  247021 pod_ready.go:94] pod "kube-proxy-kqrng" is "Ready"
	I1122 00:20:03.781594  247021 pod_ready.go:86] duration metric: took 399.684082ms for pod "kube-proxy-kqrng" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:03.981736  247021 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:04.381559  247021 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-462319" is "Ready"
	I1122 00:20:04.381590  247021 pod_ready.go:86] duration metric: took 399.825883ms for pod "kube-scheduler-old-k8s-version-462319" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:04.381604  247021 pod_ready.go:40] duration metric: took 2.408861294s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:20:04.431804  247021 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1122 00:20:04.435233  247021 out.go:203] 
	W1122 00:20:04.436473  247021 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1122 00:20:04.437863  247021 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1122 00:20:04.439555  247021 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-462319" cluster and "default" namespace by default
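
    The healthz wait logged by api_server.go earlier in this run boils down to an HTTPS GET against the apiserver's /healthz endpoint, treating a 200 response with body "ok" as healthy. A minimal sketch, assuming an InsecureSkipVerify client for brevity (minikube itself authenticates against the cluster CA):

        // Hedged sketch of the apiserver healthz probe seen in the log above.
        package main

        import (
            "crypto/tls"
            "fmt"
            "io"
            "net/http"
            "time"
        )

        func healthz(url string) (bool, error) {
            client := &http.Client{
                Timeout:   5 * time.Second,
                // assumption: skip TLS verification to keep the sketch self-contained
                Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            }
            resp, err := client.Get(url)
            if err != nil {
                return false, err
            }
            defer resp.Body.Close()
            body, _ := io.ReadAll(resp.Body)
            return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
        }

        func main() {
            ok, err := healthz("https://192.168.103.2:8443/healthz")
            fmt.Println(ok, err)
        }
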
	I1122 00:20:01.711315  260527 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1122 00:20:01.711555  260527 start.go:159] libmachine.API.Create for "embed-certs-491677" (driver="docker")
	I1122 00:20:01.711610  260527 client.go:173] LocalClient.Create starting
	I1122 00:20:01.711685  260527 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem
	I1122 00:20:01.711719  260527 main.go:143] libmachine: Decoding PEM data...
	I1122 00:20:01.711737  260527 main.go:143] libmachine: Parsing certificate...
	I1122 00:20:01.711816  260527 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9059/.minikube/certs/cert.pem
	I1122 00:20:01.711837  260527 main.go:143] libmachine: Decoding PEM data...
	I1122 00:20:01.711846  260527 main.go:143] libmachine: Parsing certificate...
	I1122 00:20:01.712184  260527 cli_runner.go:164] Run: docker network inspect embed-certs-491677 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 00:20:01.730686  260527 cli_runner.go:211] docker network inspect embed-certs-491677 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 00:20:01.730752  260527 network_create.go:284] running [docker network inspect embed-certs-491677] to gather additional debugging logs...
	I1122 00:20:01.730771  260527 cli_runner.go:164] Run: docker network inspect embed-certs-491677
	W1122 00:20:01.749708  260527 cli_runner.go:211] docker network inspect embed-certs-491677 returned with exit code 1
	I1122 00:20:01.749739  260527 network_create.go:287] error running [docker network inspect embed-certs-491677]: docker network inspect embed-certs-491677: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-491677 not found
	I1122 00:20:01.749755  260527 network_create.go:289] output of [docker network inspect embed-certs-491677]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-491677 not found
	
	** /stderr **
	I1122 00:20:01.749902  260527 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:20:01.769006  260527 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1df6c22ede91 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:c7:f4:a5:24:54} reservation:<nil>}
	I1122 00:20:01.769731  260527 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-7d48551462a8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:3b:0e:74:ee:57} reservation:<nil>}
	I1122 00:20:01.770416  260527 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c50004b7f5b6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:73:1e:0d:b7:11} reservation:<nil>}
	I1122 00:20:01.771113  260527 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-166d2f324fb5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:da:99:1e:87:6f} reservation:<nil>}
	I1122 00:20:01.771891  260527 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ebca10}
	I1122 00:20:01.771919  260527 network_create.go:124] attempt to create docker network embed-certs-491677 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1122 00:20:01.771970  260527 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-491677 embed-certs-491677
	I1122 00:20:01.823460  260527 network_create.go:108] docker network embed-certs-491677 192.168.85.0/24 created
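
    The network.go lines above scan private /24 subnets (192.168.49.0, .58.0, .67.0, .76.0, then .85.0) and take the first one not occupied by an existing bridge. A hedged sketch of that scan, assuming the 9-step third-octet increment visible in the log and a hypothetical takenSubnets map standing in for what docker network inspect reports:

        // Hedged sketch of the free-subnet scan logged above; names are illustrative.
        package main

        import "fmt"

        func firstFreeSubnet(taken map[string]bool) string {
            // step candidates by 9 in the third octet, as the log sequence suggests
            for octet := 49; octet <= 247; octet += 9 {
                candidate := fmt.Sprintf("192.168.%d.0/24", octet)
                if !taken[candidate] {
                    return candidate
                }
            }
            return ""
        }

        func main() {
            taken := map[string]bool{
                "192.168.49.0/24": true,
                "192.168.58.0/24": true,
                "192.168.67.0/24": true,
                "192.168.76.0/24": true,
            }
            fmt.Println(firstFreeSubnet(taken)) // prints 192.168.85.0/24
        }
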
	I1122 00:20:01.823495  260527 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-491677" container
	I1122 00:20:01.823677  260527 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 00:20:01.843300  260527 cli_runner.go:164] Run: docker volume create embed-certs-491677 --label name.minikube.sigs.k8s.io=embed-certs-491677 --label created_by.minikube.sigs.k8s.io=true
	I1122 00:20:01.863723  260527 oci.go:103] Successfully created a docker volume embed-certs-491677
	I1122 00:20:01.863797  260527 cli_runner.go:164] Run: docker run --rm --name embed-certs-491677-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-491677 --entrypoint /usr/bin/test -v embed-certs-491677:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1122 00:20:02.270865  260527 oci.go:107] Successfully prepared a docker volume embed-certs-491677
	I1122 00:20:02.270965  260527 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:20:02.270986  260527 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:20:02.271058  260527 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-491677:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
	I1122 00:20:02.204729  218693 cri.go:89] found id: "ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:02.204756  218693 cri.go:89] found id: ""
	I1122 00:20:02.204766  218693 logs.go:282] 1 containers: [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7]
	I1122 00:20:02.204829  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:02.209535  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1122 00:20:02.209603  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:20:02.247383  218693 cri.go:89] found id: ""
	I1122 00:20:02.247408  218693 logs.go:282] 0 containers: []
	W1122 00:20:02.247416  218693 logs.go:284] No container was found matching "coredns"
	I1122 00:20:02.247422  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:20:02.247484  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:20:02.277440  218693 cri.go:89] found id: "8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:02.277466  218693 cri.go:89] found id: "b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:02.277473  218693 cri.go:89] found id: ""
	I1122 00:20:02.277483  218693 logs.go:282] 2 containers: [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2]
	I1122 00:20:02.277545  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:02.282049  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:02.286514  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:20:02.286581  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:20:02.316706  218693 cri.go:89] found id: ""
	I1122 00:20:02.316733  218693 logs.go:282] 0 containers: []
	W1122 00:20:02.316744  218693 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:20:02.316753  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:20:02.316813  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:20:02.347451  218693 cri.go:89] found id: "91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:02.347471  218693 cri.go:89] found id: "13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:02.347476  218693 cri.go:89] found id: ""
	I1122 00:20:02.347486  218693 logs.go:282] 2 containers: [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216]
	I1122 00:20:02.347542  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:02.352378  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:02.356502  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1122 00:20:02.356561  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:20:02.384778  218693 cri.go:89] found id: ""
	I1122 00:20:02.384802  218693 logs.go:282] 0 containers: []
	W1122 00:20:02.384814  218693 logs.go:284] No container was found matching "kindnet"
	I1122 00:20:02.384825  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:20:02.384887  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:20:02.421102  218693 cri.go:89] found id: ""
	I1122 00:20:02.421131  218693 logs.go:282] 0 containers: []
	W1122 00:20:02.421143  218693 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:20:02.421156  218693 logs.go:123] Gathering logs for containerd ...
	I1122 00:20:02.421171  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1122 00:20:02.477880  218693 logs.go:123] Gathering logs for kubelet ...
	I1122 00:20:02.477924  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:20:02.574856  218693 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:20:02.574892  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:20:02.641120  218693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:20:02.641142  218693 logs.go:123] Gathering logs for kube-apiserver [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d] ...
	I1122 00:20:02.641154  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:02.681648  218693 logs.go:123] Gathering logs for kube-scheduler [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78] ...
	I1122 00:20:02.681686  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:02.739093  218693 logs.go:123] Gathering logs for kube-scheduler [b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2] ...
	I1122 00:20:02.739128  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:02.774358  218693 logs.go:123] Gathering logs for kube-controller-manager [13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216] ...
	I1122 00:20:02.774395  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:02.810272  218693 logs.go:123] Gathering logs for container status ...
	I1122 00:20:02.810310  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:20:02.842900  218693 logs.go:123] Gathering logs for dmesg ...
	I1122 00:20:02.842942  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:20:02.857743  218693 logs.go:123] Gathering logs for kube-apiserver [2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587] ...
	I1122 00:20:02.857784  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:02.894229  218693 logs.go:123] Gathering logs for etcd [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7] ...
	I1122 00:20:02.894272  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:02.929523  218693 logs.go:123] Gathering logs for kube-controller-manager [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a] ...
	I1122 00:20:02.929555  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:05.459958  218693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:20:05.460532  218693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:20:05.460597  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:20:05.460676  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:20:05.488636  218693 cri.go:89] found id: "031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:05.488658  218693 cri.go:89] found id: "2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:05.488662  218693 cri.go:89] found id: ""
	I1122 00:20:05.488670  218693 logs.go:282] 2 containers: [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587]
	I1122 00:20:05.488715  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:05.492971  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:05.496804  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1122 00:20:05.496876  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:20:05.524856  218693 cri.go:89] found id: "ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:05.524883  218693 cri.go:89] found id: ""
	I1122 00:20:05.524902  218693 logs.go:282] 1 containers: [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7]
	I1122 00:20:05.524962  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:05.529434  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1122 00:20:05.529521  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:20:05.557780  218693 cri.go:89] found id: ""
	I1122 00:20:05.557805  218693 logs.go:282] 0 containers: []
	W1122 00:20:05.557819  218693 logs.go:284] No container was found matching "coredns"
	I1122 00:20:05.557828  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:20:05.557885  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:20:05.586142  218693 cri.go:89] found id: "8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:05.586166  218693 cri.go:89] found id: "b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:05.586173  218693 cri.go:89] found id: ""
	I1122 00:20:05.586184  218693 logs.go:282] 2 containers: [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2]
	I1122 00:20:05.586248  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:05.590458  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:05.594671  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:20:05.594752  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:20:05.623542  218693 cri.go:89] found id: ""
	I1122 00:20:05.623565  218693 logs.go:282] 0 containers: []
	W1122 00:20:05.623575  218693 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:20:05.623585  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:20:05.623653  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:20:05.651642  218693 cri.go:89] found id: "91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:05.651663  218693 cri.go:89] found id: "13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:05.651666  218693 cri.go:89] found id: ""
	I1122 00:20:05.651674  218693 logs.go:282] 2 containers: [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216]
	I1122 00:20:05.651724  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:05.655785  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:05.659668  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1122 00:20:05.659743  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:20:05.687725  218693 cri.go:89] found id: ""
	I1122 00:20:05.687748  218693 logs.go:282] 0 containers: []
	W1122 00:20:05.687756  218693 logs.go:284] No container was found matching "kindnet"
	I1122 00:20:05.687762  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:20:05.687810  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:20:05.714403  218693 cri.go:89] found id: ""
	I1122 00:20:05.714432  218693 logs.go:282] 0 containers: []
	W1122 00:20:05.714444  218693 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:20:05.714457  218693 logs.go:123] Gathering logs for kube-apiserver [2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587] ...
	I1122 00:20:05.714472  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:05.748851  218693 logs.go:123] Gathering logs for kube-scheduler [b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2] ...
	I1122 00:20:05.748901  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:05.784862  218693 logs.go:123] Gathering logs for kube-controller-manager [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a] ...
	I1122 00:20:05.784899  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:05.813532  218693 logs.go:123] Gathering logs for kube-controller-manager [13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216] ...
	I1122 00:20:05.813569  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:05.844930  218693 logs.go:123] Gathering logs for containerd ...
	I1122 00:20:05.844965  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1122 00:20:05.897273  218693 logs.go:123] Gathering logs for container status ...
	I1122 00:20:05.897337  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:20:05.935381  218693 logs.go:123] Gathering logs for kubelet ...
	I1122 00:20:05.935417  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:20:06.025566  218693 logs.go:123] Gathering logs for dmesg ...
	I1122 00:20:06.025612  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:20:06.040810  218693 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:20:06.040843  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:20:06.102006  218693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:20:06.102032  218693 logs.go:123] Gathering logs for etcd [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7] ...
	I1122 00:20:06.102050  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:06.136887  218693 logs.go:123] Gathering logs for kube-scheduler [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78] ...
	I1122 00:20:06.136937  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:06.192634  218693 logs.go:123] Gathering logs for kube-apiserver [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d] ...
	I1122 00:20:06.192674  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
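
    The log-gathering cycle above repeats two commands per component: "crictl ps -a --quiet --name=<component>" to find container IDs, then "crictl logs --tail 400 <id>" for each hit. A self-contained sketch of that sweep, run locally for illustration (minikube executes the same commands over SSH inside the node):

        // Hedged sketch of the crictl-based log sweep shown in the log above.
        package main

        import (
            "fmt"
            "os/exec"
            "strings"
        )

        // containerIDs lists all container IDs whose name matches the component.
        func containerIDs(name string) ([]string, error) {
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                return nil, err
            }
            return strings.Fields(string(out)), nil
        }

        // tailLogs fetches the last 400 log lines of one container.
        func tailLogs(id string) (string, error) {
            out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
            return string(out), err
        }

        func main() {
            for _, component := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
                ids, err := containerIDs(component)
                if err != nil {
                    fmt.Println(component, "list failed:", err)
                    continue
                }
                for _, id := range ids {
                    logs, _ := tailLogs(id)
                    fmt.Printf("== %s [%s] ==\n%s\n", component, id, logs)
                }
            }
        }
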
	W1122 00:20:04.029159  251199 node_ready.go:57] node "no-preload-781232" has "Ready":"False" status (will retry)
	W1122 00:20:06.067087  251199 node_ready.go:57] node "no-preload-781232" has "Ready":"False" status (will retry)
	I1122 00:20:06.722373  260527 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-491677:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.451238931s)
	I1122 00:20:06.722412  260527 kic.go:203] duration metric: took 4.451422839s to extract preloaded images to volume ...
	W1122 00:20:06.722533  260527 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1122 00:20:06.722570  260527 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1122 00:20:06.722615  260527 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:20:06.782296  260527 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-491677 --name embed-certs-491677 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-491677 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-491677 --network embed-certs-491677 --ip 192.168.85.2 --volume embed-certs-491677:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1122 00:20:07.109552  260527 cli_runner.go:164] Run: docker container inspect embed-certs-491677 --format={{.State.Running}}
	I1122 00:20:07.129178  260527 cli_runner.go:164] Run: docker container inspect embed-certs-491677 --format={{.State.Status}}
	I1122 00:20:07.148399  260527 cli_runner.go:164] Run: docker exec embed-certs-491677 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:20:07.196229  260527 oci.go:144] the created container "embed-certs-491677" has a running status.
	I1122 00:20:07.196362  260527 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa...
	I1122 00:20:07.257446  260527 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:20:07.289218  260527 cli_runner.go:164] Run: docker container inspect embed-certs-491677 --format={{.State.Status}}
	I1122 00:20:07.310559  260527 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:20:07.310578  260527 kic_runner.go:114] Args: [docker exec --privileged embed-certs-491677 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:20:07.351585  260527 cli_runner.go:164] Run: docker container inspect embed-certs-491677 --format={{.State.Status}}
	I1122 00:20:07.374469  260527 machine.go:94] provisionDockerMachine start ...
	I1122 00:20:07.374754  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:07.397641  260527 main.go:143] libmachine: Using SSH client type: native
	I1122 00:20:07.397885  260527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1122 00:20:07.397902  260527 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:20:07.398578  260527 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36770->127.0.0.1:33073: read: connection reset by peer
	I1122 00:20:10.523553  260527 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-491677
	
	I1122 00:20:10.523587  260527 ubuntu.go:182] provisioning hostname "embed-certs-491677"
	I1122 00:20:10.523652  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:10.544251  260527 main.go:143] libmachine: Using SSH client type: native
	I1122 00:20:10.544519  260527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1122 00:20:10.544536  260527 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-491677 && echo "embed-certs-491677" | sudo tee /etc/hostname
	I1122 00:20:10.679747  260527 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-491677
	
	I1122 00:20:10.679832  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:10.700586  260527 main.go:143] libmachine: Using SSH client type: native
	I1122 00:20:10.700833  260527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1122 00:20:10.700858  260527 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-491677' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-491677/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-491677' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:20:10.825289  260527 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:20:10.825326  260527 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9059/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9059/.minikube}
	I1122 00:20:10.825375  260527 ubuntu.go:190] setting up certificates
	I1122 00:20:10.825411  260527 provision.go:84] configureAuth start
	I1122 00:20:10.825489  260527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-491677
	I1122 00:20:10.844220  260527 provision.go:143] copyHostCerts
	I1122 00:20:10.844298  260527 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9059/.minikube/ca.pem, removing ...
	I1122 00:20:10.844307  260527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9059/.minikube/ca.pem
	I1122 00:20:10.844403  260527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9059/.minikube/ca.pem (1082 bytes)
	I1122 00:20:10.844496  260527 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9059/.minikube/cert.pem, removing ...
	I1122 00:20:10.844506  260527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9059/.minikube/cert.pem
	I1122 00:20:10.844532  260527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9059/.minikube/cert.pem (1123 bytes)
	I1122 00:20:10.844590  260527 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9059/.minikube/key.pem, removing ...
	I1122 00:20:10.844598  260527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9059/.minikube/key.pem
	I1122 00:20:10.844620  260527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9059/.minikube/key.pem (1679 bytes)
	I1122 00:20:10.844669  260527 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca-key.pem org=jenkins.embed-certs-491677 san=[127.0.0.1 192.168.85.2 embed-certs-491677 localhost minikube]
	I1122 00:20:10.881095  260527 provision.go:177] copyRemoteCerts
	I1122 00:20:10.881150  260527 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:20:10.881198  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:10.899974  260527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa Username:docker}
	I1122 00:20:10.993091  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:20:11.014763  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1122 00:20:11.034702  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1122 00:20:11.053678  260527 provision.go:87] duration metric: took 228.246896ms to configureAuth
	I1122 00:20:11.053708  260527 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:20:11.053892  260527 config.go:182] Loaded profile config "embed-certs-491677": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:20:11.053909  260527 machine.go:97] duration metric: took 3.67941396s to provisionDockerMachine
	I1122 00:20:11.053917  260527 client.go:176] duration metric: took 9.342299036s to LocalClient.Create
	I1122 00:20:11.053943  260527 start.go:167] duration metric: took 9.342388491s to libmachine.API.Create "embed-certs-491677"
	I1122 00:20:11.053956  260527 start.go:293] postStartSetup for "embed-certs-491677" (driver="docker")
	I1122 00:20:11.053984  260527 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:20:11.054052  260527 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:20:11.054103  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:11.073167  260527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa Username:docker}
	I1122 00:20:11.168158  260527 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:20:11.172076  260527 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:20:11.172422  260527 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:20:11.172459  260527 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9059/.minikube/addons for local assets ...
	I1122 00:20:11.172556  260527 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9059/.minikube/files for local assets ...
	I1122 00:20:11.172675  260527 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/ssl/certs/145302.pem -> 145302.pem in /etc/ssl/certs
	I1122 00:20:11.172811  260527 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:20:11.182207  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/ssl/certs/145302.pem --> /etc/ssl/certs/145302.pem (1708 bytes)
	I1122 00:20:11.203784  260527 start.go:296] duration metric: took 149.811059ms for postStartSetup
	I1122 00:20:11.204173  260527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-491677
	I1122 00:20:11.222954  260527 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/config.json ...
	I1122 00:20:11.223305  260527 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:20:11.223354  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:11.242018  260527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa Username:docker}
	I1122 00:20:11.333726  260527 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:20:11.338527  260527 start.go:128] duration metric: took 9.62936097s to createHost
	I1122 00:20:11.338558  260527 start.go:83] releasing machines lock for "embed-certs-491677", held for 9.629502399s
	I1122 00:20:11.338631  260527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-491677
	I1122 00:20:11.357563  260527 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:20:11.357634  260527 ssh_runner.go:195] Run: cat /version.json
	I1122 00:20:11.357684  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:11.357690  260527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-491677
	I1122 00:20:11.377098  260527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa Username:docker}
	I1122 00:20:11.378067  260527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/embed-certs-491677/id_rsa Username:docker}
	I1122 00:20:08.727161  218693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:20:08.727652  218693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:20:08.727710  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:20:08.727762  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:20:08.754498  218693 cri.go:89] found id: "031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:08.754522  218693 cri.go:89] found id: "2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:08.754527  218693 cri.go:89] found id: ""
	I1122 00:20:08.754535  218693 logs.go:282] 2 containers: [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587]
	I1122 00:20:08.754583  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:08.758867  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:08.762449  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1122 00:20:08.762501  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:20:08.788422  218693 cri.go:89] found id: "ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:08.788444  218693 cri.go:89] found id: ""
	I1122 00:20:08.788455  218693 logs.go:282] 1 containers: [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7]
	I1122 00:20:08.788512  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:08.792603  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1122 00:20:08.792668  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:20:08.820677  218693 cri.go:89] found id: ""
	I1122 00:20:08.820703  218693 logs.go:282] 0 containers: []
	W1122 00:20:08.820711  218693 logs.go:284] No container was found matching "coredns"
	I1122 00:20:08.820717  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:20:08.820769  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:20:08.848396  218693 cri.go:89] found id: "8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:08.848418  218693 cri.go:89] found id: "b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:08.848422  218693 cri.go:89] found id: ""
	I1122 00:20:08.848429  218693 logs.go:282] 2 containers: [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2]
	I1122 00:20:08.848485  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:08.852633  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:08.856393  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:20:08.856469  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:20:08.884423  218693 cri.go:89] found id: ""
	I1122 00:20:08.884454  218693 logs.go:282] 0 containers: []
	W1122 00:20:08.884467  218693 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:20:08.884476  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:20:08.884529  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:20:08.911898  218693 cri.go:89] found id: "91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:08.911917  218693 cri.go:89] found id: "13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:08.911921  218693 cri.go:89] found id: ""
	I1122 00:20:08.911928  218693 logs.go:282] 2 containers: [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216]
	I1122 00:20:08.912000  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:08.916097  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:08.919808  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1122 00:20:08.919868  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:20:08.945704  218693 cri.go:89] found id: ""
	I1122 00:20:08.945731  218693 logs.go:282] 0 containers: []
	W1122 00:20:08.945742  218693 logs.go:284] No container was found matching "kindnet"
	I1122 00:20:08.945750  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:20:08.945811  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:20:08.971599  218693 cri.go:89] found id: ""
	I1122 00:20:08.971630  218693 logs.go:282] 0 containers: []
	W1122 00:20:08.971642  218693 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:20:08.971658  218693 logs.go:123] Gathering logs for dmesg ...
	I1122 00:20:08.971686  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:20:08.985779  218693 logs.go:123] Gathering logs for etcd [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7] ...
	I1122 00:20:08.985806  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:09.018373  218693 logs.go:123] Gathering logs for kube-scheduler [b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2] ...
	I1122 00:20:09.018407  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:09.055328  218693 logs.go:123] Gathering logs for containerd ...
	I1122 00:20:09.055359  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1122 00:20:09.098567  218693 logs.go:123] Gathering logs for kubelet ...
	I1122 00:20:09.098608  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:20:09.183392  218693 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:20:09.183433  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:20:09.242636  218693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:20:09.242654  218693 logs.go:123] Gathering logs for kube-apiserver [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d] ...
	I1122 00:20:09.242666  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:09.276133  218693 logs.go:123] Gathering logs for kube-apiserver [2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587] ...
	I1122 00:20:09.276179  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:09.310731  218693 logs.go:123] Gathering logs for kube-scheduler [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78] ...
	I1122 00:20:09.310769  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:09.362187  218693 logs.go:123] Gathering logs for kube-controller-manager [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a] ...
	I1122 00:20:09.362226  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:09.391737  218693 logs.go:123] Gathering logs for kube-controller-manager [13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216] ...
	I1122 00:20:09.391763  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:09.425753  218693 logs.go:123] Gathering logs for container status ...
	I1122 00:20:09.425787  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
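	The cycle above is the generic "gather logs" pass minikube runs while the apiserver is unreachable: list every container (running or exited) per component name, then dump the tail of each, plus the kubelet/containerd journals and dmesg. A minimal hand-run equivalent, assuming crictl and journalctl are available inside the node (component list taken from the log):
	  for name in kube-apiserver etcd kube-scheduler kube-controller-manager kube-proxy coredns kindnet storage-provisioner; do
	    for id in $(sudo crictl ps -a --quiet --name "$name"); do
	      echo "=== $name ($id) ==="
	      sudo crictl logs --tail 400 "$id"      # same tail length as logs.go uses
	    done
	  done
	  sudo journalctl -u kubelet -n 400          # host-level sources from the same pass
	  sudo journalctl -u containerd -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400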
	I1122 00:20:11.959328  218693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:20:11.959805  218693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
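	The healthz probe that fails here is just an HTTPS GET against the apiserver; a rough equivalent from the host, assuming curl is available (the self-signed cert is not trusted, hence -k):
	  # "connection refused" means nothing is listening on 8443 yet,
	  # which is why the driver falls back to gathering logs again.
	  curl -k --max-time 2 https://192.168.76.2:8443/healthz; echo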
	I1122 00:20:11.959868  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:20:11.959935  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:20:11.993113  218693 cri.go:89] found id: "031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:11.993137  218693 cri.go:89] found id: "2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:11.993143  218693 cri.go:89] found id: ""
	I1122 00:20:11.993153  218693 logs.go:282] 2 containers: [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587]
	I1122 00:20:11.993213  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:11.997946  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:12.002616  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1122 00:20:12.002741  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:20:12.040113  218693 cri.go:89] found id: "ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:12.040150  218693 cri.go:89] found id: ""
	I1122 00:20:12.040160  218693 logs.go:282] 1 containers: [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7]
	I1122 00:20:12.040220  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:12.045665  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1122 00:20:12.045732  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:20:12.081343  218693 cri.go:89] found id: ""
	I1122 00:20:12.081375  218693 logs.go:282] 0 containers: []
	W1122 00:20:12.081384  218693 logs.go:284] No container was found matching "coredns"
	I1122 00:20:12.081389  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:20:12.081449  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:20:12.116486  218693 cri.go:89] found id: "8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:12.117024  218693 cri.go:89] found id: "b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:12.117045  218693 cri.go:89] found id: ""
	I1122 00:20:12.117055  218693 logs.go:282] 2 containers: [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2]
	I1122 00:20:12.117115  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:12.121469  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:12.125453  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:20:12.125520  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:20:12.159076  218693 cri.go:89] found id: ""
	I1122 00:20:12.159108  218693 logs.go:282] 0 containers: []
	W1122 00:20:12.159121  218693 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:20:12.159130  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:20:12.159191  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:20:11.523900  260527 ssh_runner.go:195] Run: systemctl --version
	I1122 00:20:11.531084  260527 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:20:11.536010  260527 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:20:11.536130  260527 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:20:11.563766  260527 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1122 00:20:11.563792  260527 start.go:496] detecting cgroup driver to use...
	I1122 00:20:11.563830  260527 detect.go:190] detected "systemd" cgroup driver on host os
	I1122 00:20:11.563873  260527 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1122 00:20:11.579543  260527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1122 00:20:11.593598  260527 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:20:11.593666  260527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:20:11.610889  260527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:20:11.629723  260527 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:20:11.730670  260527 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:20:11.819921  260527 docker.go:234] disabling docker service ...
	I1122 00:20:11.819985  260527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:20:11.839159  260527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:20:11.854142  260527 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:20:11.943699  260527 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:20:12.053855  260527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:20:12.073171  260527 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:20:12.089999  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1122 00:20:12.105012  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1122 00:20:12.117591  260527 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1122 00:20:12.117652  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1122 00:20:12.128817  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:20:12.142147  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1122 00:20:12.154635  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:20:12.169029  260527 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:20:12.181631  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1122 00:20:12.194568  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1122 00:20:12.207294  260527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1122 00:20:12.218684  260527 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:20:12.228679  260527 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:20:12.241707  260527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:20:12.337447  260527 ssh_runner.go:195] Run: sudo systemctl restart containerd
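	Taken together, the sed calls above rewrite /etc/containerd/config.toml in place before the restart. A condensed sketch of the same edits, with keys and values exactly as logged and no other options assumed:
	  sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
	  sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml
	  sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml   # match the "systemd" cgroup driver detected on the host
	  sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml
	  sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml
	  sudo systemctl daemon-reload && sudo systemctl restart containerd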
	I1122 00:20:12.443801  260527 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1122 00:20:12.443870  260527 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1122 00:20:12.448114  260527 start.go:564] Will wait 60s for crictl version
	I1122 00:20:12.448178  260527 ssh_runner.go:195] Run: which crictl
	I1122 00:20:12.452113  260527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:20:12.481619  260527 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1122 00:20:12.481687  260527 ssh_runner.go:195] Run: containerd --version
	I1122 00:20:12.506954  260527 ssh_runner.go:195] Run: containerd --version
	I1122 00:20:12.537127  260527 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	W1122 00:20:08.528688  251199 node_ready.go:57] node "no-preload-781232" has "Ready":"False" status (will retry)
	W1122 00:20:10.529626  251199 node_ready.go:57] node "no-preload-781232" has "Ready":"False" status (will retry)
	I1122 00:20:12.029744  251199 node_ready.go:49] node "no-preload-781232" is "Ready"
	I1122 00:20:12.029782  251199 node_ready.go:38] duration metric: took 14.503754974s for node "no-preload-781232" to be "Ready" ...
	I1122 00:20:12.029799  251199 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:20:12.029867  251199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:20:12.049755  251199 api_server.go:72] duration metric: took 14.826557708s to wait for apiserver process to appear ...
	I1122 00:20:12.049782  251199 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:20:12.049803  251199 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1122 00:20:12.055733  251199 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1122 00:20:12.057374  251199 api_server.go:141] control plane version: v1.34.1
	I1122 00:20:12.057405  251199 api_server.go:131] duration metric: took 7.61544ms to wait for apiserver health ...
	I1122 00:20:12.057416  251199 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:20:12.062154  251199 system_pods.go:59] 8 kube-system pods found
	I1122 00:20:12.062190  251199 system_pods.go:61] "coredns-66bc5c9577-9wcct" [67b97cc5-016b-44d1-8119-dd6aa4932f83] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:12.062199  251199 system_pods.go:61] "etcd-no-preload-781232" [85c9627b-3102-439d-83e4-9ee3353591c1] Running
	I1122 00:20:12.062207  251199 system_pods.go:61] "kindnet-llcnc" [fcdd9f25-4804-47c2-8f09-b6a2d688a8bc] Running
	I1122 00:20:12.062212  251199 system_pods.go:61] "kube-apiserver-no-preload-781232" [4a4b5bf8-8262-46c5-9aa8-5a0bb0af364c] Running
	I1122 00:20:12.062218  251199 system_pods.go:61] "kube-controller-manager-no-preload-781232" [0c4fed80-9ce3-4b0d-99dd-ae11fc92104e] Running
	I1122 00:20:12.062223  251199 system_pods.go:61] "kube-proxy-685jg" [33a2d2c1-e364-4ec8-a9a0-69ba9146625f] Running
	I1122 00:20:12.062228  251199 system_pods.go:61] "kube-scheduler-no-preload-781232" [ec2ea83e-6638-4945-b4e4-ef3142f30481] Running
	I1122 00:20:12.062237  251199 system_pods.go:61] "storage-provisioner" [904bdf70-7728-45c5-a9ae-487aed28e6fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:12.062245  251199 system_pods.go:74] duration metric: took 4.821603ms to wait for pod list to return data ...
	I1122 00:20:12.062254  251199 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:20:12.065112  251199 default_sa.go:45] found service account: "default"
	I1122 00:20:12.065138  251199 default_sa.go:55] duration metric: took 2.848928ms for default service account to be created ...
	I1122 00:20:12.065149  251199 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:20:12.069582  251199 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:12.069625  251199 system_pods.go:89] "coredns-66bc5c9577-9wcct" [67b97cc5-016b-44d1-8119-dd6aa4932f83] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:12.069633  251199 system_pods.go:89] "etcd-no-preload-781232" [85c9627b-3102-439d-83e4-9ee3353591c1] Running
	I1122 00:20:12.069648  251199 system_pods.go:89] "kindnet-llcnc" [fcdd9f25-4804-47c2-8f09-b6a2d688a8bc] Running
	I1122 00:20:12.069655  251199 system_pods.go:89] "kube-apiserver-no-preload-781232" [4a4b5bf8-8262-46c5-9aa8-5a0bb0af364c] Running
	I1122 00:20:12.069661  251199 system_pods.go:89] "kube-controller-manager-no-preload-781232" [0c4fed80-9ce3-4b0d-99dd-ae11fc92104e] Running
	I1122 00:20:12.069666  251199 system_pods.go:89] "kube-proxy-685jg" [33a2d2c1-e364-4ec8-a9a0-69ba9146625f] Running
	I1122 00:20:12.069670  251199 system_pods.go:89] "kube-scheduler-no-preload-781232" [ec2ea83e-6638-4945-b4e4-ef3142f30481] Running
	I1122 00:20:12.069676  251199 system_pods.go:89] "storage-provisioner" [904bdf70-7728-45c5-a9ae-487aed28e6fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:12.069728  251199 retry.go:31] will retry after 227.269849ms: missing components: kube-dns
	I1122 00:20:12.301834  251199 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:12.301869  251199 system_pods.go:89] "coredns-66bc5c9577-9wcct" [67b97cc5-016b-44d1-8119-dd6aa4932f83] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:12.301877  251199 system_pods.go:89] "etcd-no-preload-781232" [85c9627b-3102-439d-83e4-9ee3353591c1] Running
	I1122 00:20:12.301886  251199 system_pods.go:89] "kindnet-llcnc" [fcdd9f25-4804-47c2-8f09-b6a2d688a8bc] Running
	I1122 00:20:12.301892  251199 system_pods.go:89] "kube-apiserver-no-preload-781232" [4a4b5bf8-8262-46c5-9aa8-5a0bb0af364c] Running
	I1122 00:20:12.301898  251199 system_pods.go:89] "kube-controller-manager-no-preload-781232" [0c4fed80-9ce3-4b0d-99dd-ae11fc92104e] Running
	I1122 00:20:12.301903  251199 system_pods.go:89] "kube-proxy-685jg" [33a2d2c1-e364-4ec8-a9a0-69ba9146625f] Running
	I1122 00:20:12.301910  251199 system_pods.go:89] "kube-scheduler-no-preload-781232" [ec2ea83e-6638-4945-b4e4-ef3142f30481] Running
	I1122 00:20:12.301917  251199 system_pods.go:89] "storage-provisioner" [904bdf70-7728-45c5-a9ae-487aed28e6fc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:12.301938  251199 retry.go:31] will retry after 387.887736ms: missing components: kube-dns
	I1122 00:20:12.694992  251199 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:12.695026  251199 system_pods.go:89] "coredns-66bc5c9577-9wcct" [67b97cc5-016b-44d1-8119-dd6aa4932f83] Running
	I1122 00:20:12.695035  251199 system_pods.go:89] "etcd-no-preload-781232" [85c9627b-3102-439d-83e4-9ee3353591c1] Running
	I1122 00:20:12.695041  251199 system_pods.go:89] "kindnet-llcnc" [fcdd9f25-4804-47c2-8f09-b6a2d688a8bc] Running
	I1122 00:20:12.695047  251199 system_pods.go:89] "kube-apiserver-no-preload-781232" [4a4b5bf8-8262-46c5-9aa8-5a0bb0af364c] Running
	I1122 00:20:12.695052  251199 system_pods.go:89] "kube-controller-manager-no-preload-781232" [0c4fed80-9ce3-4b0d-99dd-ae11fc92104e] Running
	I1122 00:20:12.695060  251199 system_pods.go:89] "kube-proxy-685jg" [33a2d2c1-e364-4ec8-a9a0-69ba9146625f] Running
	I1122 00:20:12.695065  251199 system_pods.go:89] "kube-scheduler-no-preload-781232" [ec2ea83e-6638-4945-b4e4-ef3142f30481] Running
	I1122 00:20:12.695070  251199 system_pods.go:89] "storage-provisioner" [904bdf70-7728-45c5-a9ae-487aed28e6fc] Running
	I1122 00:20:12.695080  251199 system_pods.go:126] duration metric: took 629.924123ms to wait for k8s-apps to be running ...
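	The retry loop above amounts to polling the kube-system namespace until no pod (here, coredns and storage-provisioner) is still Pending. A hand-rolled equivalent with kubectl; the context name mirrors the profile name and the 2s interval is an arbitrary choice:
	  until ! kubectl --context no-preload-781232 get pods -n kube-system --no-headers \
	      | awk '{print $3}' | grep -qv Running; do
	    sleep 2      # system_pods.go uses jittered backoff instead of a fixed interval
	  done
	  kubectl --context no-preload-781232 get pods -n kube-system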
	I1122 00:20:12.695093  251199 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:20:12.695144  251199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:20:12.708823  251199 system_svc.go:56] duration metric: took 13.721013ms WaitForService to wait for kubelet
	I1122 00:20:12.708855  251199 kubeadm.go:587] duration metric: took 15.485663176s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:20:12.708874  251199 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:20:12.712345  251199 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1122 00:20:12.712376  251199 node_conditions.go:123] node cpu capacity is 8
	I1122 00:20:12.712396  251199 node_conditions.go:105] duration metric: took 3.516354ms to run NodePressure ...
	I1122 00:20:12.712412  251199 start.go:242] waiting for startup goroutines ...
	I1122 00:20:12.712423  251199 start.go:247] waiting for cluster config update ...
	I1122 00:20:12.712441  251199 start.go:256] writing updated cluster config ...
	I1122 00:20:12.712733  251199 ssh_runner.go:195] Run: rm -f paused
	I1122 00:20:12.717390  251199 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:20:12.721696  251199 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9wcct" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:12.726947  251199 pod_ready.go:94] pod "coredns-66bc5c9577-9wcct" is "Ready"
	I1122 00:20:12.726976  251199 pod_ready.go:86] duration metric: took 5.255643ms for pod "coredns-66bc5c9577-9wcct" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:12.729559  251199 pod_ready.go:83] waiting for pod "etcd-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:12.734425  251199 pod_ready.go:94] pod "etcd-no-preload-781232" is "Ready"
	I1122 00:20:12.734455  251199 pod_ready.go:86] duration metric: took 4.86467ms for pod "etcd-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:12.736916  251199 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:12.741485  251199 pod_ready.go:94] pod "kube-apiserver-no-preload-781232" is "Ready"
	I1122 00:20:12.741515  251199 pod_ready.go:86] duration metric: took 4.574913ms for pod "kube-apiserver-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:12.743848  251199 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:13.121924  251199 pod_ready.go:94] pod "kube-controller-manager-no-preload-781232" is "Ready"
	I1122 00:20:13.121957  251199 pod_ready.go:86] duration metric: took 378.084436ms for pod "kube-controller-manager-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:13.322463  251199 pod_ready.go:83] waiting for pod "kube-proxy-685jg" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:13.721973  251199 pod_ready.go:94] pod "kube-proxy-685jg" is "Ready"
	I1122 00:20:13.722003  251199 pod_ready.go:86] duration metric: took 399.513258ms for pod "kube-proxy-685jg" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:13.922497  251199 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:14.322798  251199 pod_ready.go:94] pod "kube-scheduler-no-preload-781232" is "Ready"
	I1122 00:20:14.322835  251199 pod_ready.go:86] duration metric: took 400.307889ms for pod "kube-scheduler-no-preload-781232" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:14.322851  251199 pod_ready.go:40] duration metric: took 1.605427799s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
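	The per-pod "Ready or be gone" checks above can be approximated with kubectl wait against the same label selectors; a sketch, with the 4m budget taken from the log and the context name from this profile:
	  for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	             component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	    kubectl --context no-preload-781232 -n kube-system wait pod -l "$sel" \
	      --for=condition=Ready --timeout=4m
	  done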
	I1122 00:20:14.392629  251199 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1122 00:20:14.394856  251199 out.go:179] * Done! kubectl is now configured to use "no-preload-781232" cluster and "default" namespace by default
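	Once a profile reports "Done!", the cluster is reachable through the kubeconfig context of the same name; a quick sanity check looks like:
	  kubectl --context no-preload-781232 get nodes -o wide
	  kubectl --context no-preload-781232 -n default get pods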
	I1122 00:20:12.541500  260527 cli_runner.go:164] Run: docker network inspect embed-certs-491677 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:20:12.574015  260527 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1122 00:20:12.578297  260527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
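	The grep-then-copy one-liner above is an idempotent way to pin host.minikube.internal in the node's /etc/hosts: drop any stale entry, append the current gateway mapping, and copy the result back (sudo is needed only for the final write). Spelled out, with the gateway IP as logged:
	  { grep -v $'\thost.minikube.internal$' /etc/hosts; \
	    printf '192.168.85.1\thost.minikube.internal\n'; } > /tmp/hosts.$$
	  sudo cp /tmp/hosts.$$ /etc/hosts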
	I1122 00:20:12.589491  260527 kubeadm.go:884] updating cluster {Name:embed-certs-491677 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-491677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:20:12.589632  260527 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:20:12.589697  260527 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:20:12.617010  260527 containerd.go:627] all images are preloaded for containerd runtime.
	I1122 00:20:12.617037  260527 containerd.go:534] Images already preloaded, skipping extraction
	I1122 00:20:12.617098  260527 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:20:12.644125  260527 containerd.go:627] all images are preloaded for containerd runtime.
	I1122 00:20:12.644148  260527 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:20:12.644157  260527 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1122 00:20:12.644310  260527 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-491677 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-491677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:20:12.644388  260527 ssh_runner.go:195] Run: sudo crictl info
	I1122 00:20:12.673869  260527 cni.go:84] Creating CNI manager for ""
	I1122 00:20:12.673899  260527 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:20:12.673919  260527 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:20:12.673948  260527 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-491677 NodeName:embed-certs-491677 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:20:12.674142  260527 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-491677"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:20:12.674219  260527 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:20:12.683635  260527 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:20:12.683710  260527 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:20:12.692341  260527 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1122 00:20:12.708136  260527 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:20:12.727111  260527 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1122 00:20:12.743788  260527 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:20:12.747754  260527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:20:12.758812  260527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:20:12.844867  260527 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:20:12.869740  260527 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677 for IP: 192.168.85.2
	I1122 00:20:12.869763  260527 certs.go:195] generating shared ca certs ...
	I1122 00:20:12.869790  260527 certs.go:227] acquiring lock for ca certs: {Name:mkcee17f48cab2703d4de8a78a6fb8af44d9e7e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:20:12.869989  260527 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-9059/.minikube/ca.key
	I1122 00:20:12.870065  260527 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-9059/.minikube/proxy-client-ca.key
	I1122 00:20:12.870084  260527 certs.go:257] generating profile certs ...
	I1122 00:20:12.870146  260527 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/client.key
	I1122 00:20:12.870166  260527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/client.crt with IP's: []
	I1122 00:20:12.908186  260527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/client.crt ...
	I1122 00:20:12.908216  260527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/client.crt: {Name:mk8704ecde753d7119b44ed45cfda92e5dc05630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:20:12.908420  260527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/client.key ...
	I1122 00:20:12.908436  260527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/client.key: {Name:mkb2d6bf770bf45b16a4eca78c32fdcff2885211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:20:12.908547  260527 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/apiserver.key.c79253ad
	I1122 00:20:12.908570  260527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/apiserver.crt.c79253ad with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1122 00:20:13.019354  260527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/apiserver.crt.c79253ad ...
	I1122 00:20:13.019392  260527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/apiserver.crt.c79253ad: {Name:mk1762d9d01731b3cbac46975805ab095bb2b8bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:20:13.019599  260527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/apiserver.key.c79253ad ...
	I1122 00:20:13.019618  260527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/apiserver.key.c79253ad: {Name:mk75d2f3b968084584154e473183ab1de1ddfdef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:20:13.019739  260527 certs.go:382] copying /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/apiserver.crt.c79253ad -> /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/apiserver.crt
	I1122 00:20:13.019842  260527 certs.go:386] copying /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/apiserver.key.c79253ad -> /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/apiserver.key
	I1122 00:20:13.019938  260527 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/proxy-client.key
	I1122 00:20:13.019956  260527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/proxy-client.crt with IP's: []
	I1122 00:20:13.050653  260527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/proxy-client.crt ...
	I1122 00:20:13.050681  260527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/proxy-client.crt: {Name:mka897102b38131787dec19ca98371262dbbfbff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:20:13.050873  260527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/proxy-client.key ...
	I1122 00:20:13.050902  260527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/proxy-client.key: {Name:mk16d21cb9e06711fe89c5e2d2bb5e78642dddf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:20:13.051132  260527 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/14530.pem (1338 bytes)
	W1122 00:20:13.051181  260527 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-9059/.minikube/certs/14530_empty.pem, impossibly tiny 0 bytes
	I1122 00:20:13.051197  260527 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca-key.pem (1675 bytes)
	I1122 00:20:13.051233  260527 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem (1082 bytes)
	I1122 00:20:13.051277  260527 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:20:13.051314  260527 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/key.pem (1679 bytes)
	I1122 00:20:13.051374  260527 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/ssl/certs/145302.pem (1708 bytes)
	I1122 00:20:13.051960  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:20:13.070735  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:20:13.090249  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:20:13.108597  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1122 00:20:13.128028  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1122 00:20:13.147582  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:20:13.165509  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:20:13.183679  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/embed-certs-491677/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1122 00:20:13.202761  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:20:13.225144  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/certs/14530.pem --> /usr/share/ca-certificates/14530.pem (1338 bytes)
	I1122 00:20:13.243856  260527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/ssl/certs/145302.pem --> /usr/share/ca-certificates/145302.pem (1708 bytes)
	I1122 00:20:13.263152  260527 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:20:13.277050  260527 ssh_runner.go:195] Run: openssl version
	I1122 00:20:13.283706  260527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:20:13.294163  260527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:20:13.298425  260527 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:47 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:20:13.298493  260527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:20:13.335091  260527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:20:13.344437  260527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14530.pem && ln -fs /usr/share/ca-certificates/14530.pem /etc/ssl/certs/14530.pem"
	I1122 00:20:13.354368  260527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14530.pem
	I1122 00:20:13.358613  260527 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:52 /usr/share/ca-certificates/14530.pem
	I1122 00:20:13.358673  260527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14530.pem
	I1122 00:20:13.393614  260527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14530.pem /etc/ssl/certs/51391683.0"
	I1122 00:20:13.403768  260527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145302.pem && ln -fs /usr/share/ca-certificates/145302.pem /etc/ssl/certs/145302.pem"
	I1122 00:20:13.412603  260527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145302.pem
	I1122 00:20:13.416857  260527 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:52 /usr/share/ca-certificates/145302.pem
	I1122 00:20:13.416924  260527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145302.pem
	I1122 00:20:13.454565  260527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145302.pem /etc/ssl/certs/3ec20f2e.0"
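	Each openssl/ln pair above installs a CA into the node's trust store under its X.509 subject-hash name, which is how OpenSSL-based clients look certificates up in /etc/ssl/certs. The pattern for one file, matching the minikubeCA steps logged above:
	  cert=/usr/share/ca-certificates/minikubeCA.pem
	  sudo ln -fs "$cert" /etc/ssl/certs/minikubeCA.pem
	  hash=$(openssl x509 -hash -noout -in "$cert")                       # e.g. b5213941, as seen in the log
	  sudo test -L "/etc/ssl/certs/${hash}.0" || sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"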
	I1122 00:20:13.464818  260527 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:20:13.468886  260527 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1122 00:20:13.468942  260527 kubeadm.go:401] StartCluster: {Name:embed-certs-491677 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-491677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:20:13.469046  260527 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1122 00:20:13.469089  260527 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:20:13.496549  260527 cri.go:89] found id: ""
	I1122 00:20:13.496613  260527 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:20:13.505745  260527 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:20:13.515197  260527 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:20:13.515253  260527 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:20:13.524576  260527 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:20:13.524596  260527 kubeadm.go:158] found existing configuration files:
	
	I1122 00:20:13.524646  260527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1122 00:20:13.533544  260527 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:20:13.533603  260527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:20:13.542351  260527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1122 00:20:13.552273  260527 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:20:13.552347  260527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:20:13.562028  260527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1122 00:20:13.571876  260527 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:20:13.571926  260527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:20:13.582394  260527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1122 00:20:13.591183  260527 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:20:13.591246  260527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
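	The four grep-then-rm exchanges above are the stale-config check: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so kubeadm init starts clean. The same check as a loop (endpoint as logged):
	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/$f" \
	      || sudo rm -f "/etc/kubernetes/$f"
	  done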
	I1122 00:20:13.600121  260527 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:20:13.660570  260527 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1122 00:20:13.719464  260527 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1122 00:20:12.194554  218693 cri.go:89] found id: "91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:12.194580  218693 cri.go:89] found id: "13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:12.194586  218693 cri.go:89] found id: ""
	I1122 00:20:12.194597  218693 logs.go:282] 2 containers: [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216]
	I1122 00:20:12.194653  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:12.200688  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:12.205547  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1122 00:20:12.205617  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:20:12.243134  218693 cri.go:89] found id: ""
	I1122 00:20:12.243161  218693 logs.go:282] 0 containers: []
	W1122 00:20:12.243171  218693 logs.go:284] No container was found matching "kindnet"
	I1122 00:20:12.243181  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:20:12.243239  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:20:12.271092  218693 cri.go:89] found id: ""
	I1122 00:20:12.271125  218693 logs.go:282] 0 containers: []
	W1122 00:20:12.271137  218693 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:20:12.271149  218693 logs.go:123] Gathering logs for kube-apiserver [2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587] ...
	I1122 00:20:12.271168  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:12.310696  218693 logs.go:123] Gathering logs for kube-scheduler [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78] ...
	I1122 00:20:12.310725  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:12.367453  218693 logs.go:123] Gathering logs for kube-scheduler [b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2] ...
	I1122 00:20:12.367497  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:12.401777  218693 logs.go:123] Gathering logs for container status ...
	I1122 00:20:12.401820  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:20:12.437519  218693 logs.go:123] Gathering logs for kubelet ...
	I1122 00:20:12.437557  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:20:12.543639  218693 logs.go:123] Gathering logs for dmesg ...
	I1122 00:20:12.543674  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:20:12.570582  218693 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:20:12.570613  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:20:12.633684  218693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:20:12.633704  218693 logs.go:123] Gathering logs for kube-apiserver [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d] ...
	I1122 00:20:12.633716  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:12.667421  218693 logs.go:123] Gathering logs for etcd [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7] ...
	I1122 00:20:12.667454  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:12.703894  218693 logs.go:123] Gathering logs for kube-controller-manager [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a] ...
	I1122 00:20:12.703924  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:12.736729  218693 logs.go:123] Gathering logs for kube-controller-manager [13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216] ...
	I1122 00:20:12.736764  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:12.771593  218693 logs.go:123] Gathering logs for containerd ...
	I1122 00:20:12.771626  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1122 00:20:15.325334  218693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:20:15.325674  218693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:20:15.325737  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:20:15.325785  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:20:15.360483  218693 cri.go:89] found id: "031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:15.360505  218693 cri.go:89] found id: "2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:15.360519  218693 cri.go:89] found id: ""
	I1122 00:20:15.360536  218693 logs.go:282] 2 containers: [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587]
	I1122 00:20:15.360596  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:15.365000  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:15.369124  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1122 00:20:15.369192  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:20:15.400520  218693 cri.go:89] found id: "ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:15.400545  218693 cri.go:89] found id: ""
	I1122 00:20:15.400556  218693 logs.go:282] 1 containers: [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7]
	I1122 00:20:15.400615  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:15.405111  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1122 00:20:15.405188  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:20:15.440253  218693 cri.go:89] found id: ""
	I1122 00:20:15.440297  218693 logs.go:282] 0 containers: []
	W1122 00:20:15.440308  218693 logs.go:284] No container was found matching "coredns"
	I1122 00:20:15.440317  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:20:15.440381  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:20:15.475042  218693 cri.go:89] found id: "8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:15.475067  218693 cri.go:89] found id: "b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:15.475073  218693 cri.go:89] found id: ""
	I1122 00:20:15.475082  218693 logs.go:282] 2 containers: [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2]
	I1122 00:20:15.475143  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:15.479941  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:15.484606  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:20:15.484676  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:20:15.518209  218693 cri.go:89] found id: ""
	I1122 00:20:15.518299  218693 logs.go:282] 0 containers: []
	W1122 00:20:15.518314  218693 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:20:15.518323  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:20:15.518397  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:20:15.549238  218693 cri.go:89] found id: "718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f"
	I1122 00:20:15.549298  218693 cri.go:89] found id: "91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:15.549306  218693 cri.go:89] found id: "13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:15.549311  218693 cri.go:89] found id: ""
	I1122 00:20:15.549321  218693 logs.go:282] 3 containers: [718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f 91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216]
	I1122 00:20:15.549409  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:15.554575  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:15.559690  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:15.564140  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1122 00:20:15.564212  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:20:15.593979  218693 cri.go:89] found id: ""
	I1122 00:20:15.594001  218693 logs.go:282] 0 containers: []
	W1122 00:20:15.594009  218693 logs.go:284] No container was found matching "kindnet"
	I1122 00:20:15.594016  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:20:15.594076  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:20:15.621716  218693 cri.go:89] found id: ""
	I1122 00:20:15.621740  218693 logs.go:282] 0 containers: []
	W1122 00:20:15.621751  218693 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:20:15.621763  218693 logs.go:123] Gathering logs for dmesg ...
	I1122 00:20:15.621777  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:20:15.635879  218693 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:20:15.635908  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:20:15.700277  218693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:20:15.700302  218693 logs.go:123] Gathering logs for kube-apiserver [2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587] ...
	I1122 00:20:15.700368  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:15.744118  218693 logs.go:123] Gathering logs for kube-scheduler [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78] ...
	I1122 00:20:15.744151  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:15.804869  218693 logs.go:123] Gathering logs for kube-scheduler [b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2] ...
	I1122 00:20:15.804914  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:15.852799  218693 logs.go:123] Gathering logs for kube-controller-manager [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a] ...
	I1122 00:20:15.852837  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:15.886163  218693 logs.go:123] Gathering logs for kube-controller-manager [13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216] ...
	I1122 00:20:15.886199  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:15.922695  218693 logs.go:123] Gathering logs for containerd ...
	I1122 00:20:15.922727  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1122 00:20:15.974295  218693 logs.go:123] Gathering logs for kubelet ...
	I1122 00:20:15.974327  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:20:16.072397  218693 logs.go:123] Gathering logs for kube-apiserver [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d] ...
	I1122 00:20:16.072432  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:16.107409  218693 logs.go:123] Gathering logs for etcd [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7] ...
	I1122 00:20:16.107443  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:16.140406  218693 logs.go:123] Gathering logs for kube-controller-manager [718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f] ...
	I1122 00:20:16.140442  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f"
	I1122 00:20:16.176750  218693 logs.go:123] Gathering logs for container status ...
	I1122 00:20:16.176792  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:20:18.717355  218693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:20:18.717807  218693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:20:18.717881  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:20:18.717941  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:20:18.769197  218693 cri.go:89] found id: "031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:18.769228  218693 cri.go:89] found id: "2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:18.769235  218693 cri.go:89] found id: ""
	I1122 00:20:18.769244  218693 logs.go:282] 2 containers: [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587]
	I1122 00:20:18.769347  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:18.777815  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:18.783829  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1122 00:20:18.783910  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:20:18.824794  218693 cri.go:89] found id: "ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:18.824816  218693 cri.go:89] found id: ""
	I1122 00:20:18.824826  218693 logs.go:282] 1 containers: [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7]
	I1122 00:20:18.824884  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:18.829608  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1122 00:20:18.829692  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:20:18.865927  218693 cri.go:89] found id: ""
	I1122 00:20:18.865964  218693 logs.go:282] 0 containers: []
	W1122 00:20:18.865977  218693 logs.go:284] No container was found matching "coredns"
	I1122 00:20:18.865985  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:20:18.866042  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:20:18.899699  218693 cri.go:89] found id: "8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:18.899718  218693 cri.go:89] found id: "b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:18.899722  218693 cri.go:89] found id: ""
	I1122 00:20:18.899730  218693 logs.go:282] 2 containers: [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2]
	I1122 00:20:18.899775  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:18.904742  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:18.910347  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:20:18.910428  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:20:18.943667  218693 cri.go:89] found id: ""
	I1122 00:20:18.943693  218693 logs.go:282] 0 containers: []
	W1122 00:20:18.943702  218693 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:20:18.943710  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:20:18.943775  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:20:18.979450  218693 cri.go:89] found id: "718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f"
	I1122 00:20:18.979488  218693 cri.go:89] found id: "91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:18.979496  218693 cri.go:89] found id: "13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:18.979502  218693 cri.go:89] found id: ""
	I1122 00:20:18.979512  218693 logs.go:282] 3 containers: [718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f 91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216]
	I1122 00:20:18.979585  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:18.984932  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:18.989393  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:18.996874  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1122 00:20:18.996940  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:20:19.045639  218693 cri.go:89] found id: ""
	I1122 00:20:19.045665  218693 logs.go:282] 0 containers: []
	W1122 00:20:19.045683  218693 logs.go:284] No container was found matching "kindnet"
	I1122 00:20:19.045691  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:20:19.045746  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:20:19.082793  218693 cri.go:89] found id: ""
	I1122 00:20:19.082818  218693 logs.go:282] 0 containers: []
	W1122 00:20:19.082832  218693 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:20:19.082843  218693 logs.go:123] Gathering logs for kubelet ...
	I1122 00:20:19.082857  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:20:19.202501  218693 logs.go:123] Gathering logs for dmesg ...
	I1122 00:20:19.202545  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:20:19.221253  218693 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:20:19.221346  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:20:19.311057  218693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:20:19.311138  218693 logs.go:123] Gathering logs for kube-apiserver [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d] ...
	I1122 00:20:19.311172  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:19.351947  218693 logs.go:123] Gathering logs for kube-apiserver [2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587] ...
	I1122 00:20:19.351994  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:19.405038  218693 logs.go:123] Gathering logs for kube-controller-manager [13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216] ...
	I1122 00:20:19.405079  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:19.449168  218693 logs.go:123] Gathering logs for containerd ...
	I1122 00:20:19.449210  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1122 00:20:19.516475  218693 logs.go:123] Gathering logs for container status ...
	I1122 00:20:19.516518  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:20:19.556284  218693 logs.go:123] Gathering logs for etcd [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7] ...
	I1122 00:20:19.556324  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:19.600214  218693 logs.go:123] Gathering logs for kube-scheduler [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78] ...
	I1122 00:20:19.600248  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:19.667408  218693 logs.go:123] Gathering logs for kube-scheduler [b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2] ...
	I1122 00:20:19.667453  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:19.712773  218693 logs.go:123] Gathering logs for kube-controller-manager [718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f] ...
	I1122 00:20:19.712809  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f"
	I1122 00:20:19.747902  218693 logs.go:123] Gathering logs for kube-controller-manager [91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a] ...
	I1122 00:20:19.747943  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 91989ea7d1eb87264ea639688db06633fb66749e41f18e88a4bd9a185ac7a68a"
	I1122 00:20:24.013741  260527 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1122 00:20:24.013841  260527 kubeadm.go:319] [preflight] Running pre-flight checks
	I1122 00:20:24.013971  260527 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:20:24.014051  260527 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1122 00:20:24.014118  260527 kubeadm.go:319] OS: Linux
	I1122 00:20:24.014182  260527 kubeadm.go:319] CGROUPS_CPU: enabled
	I1122 00:20:24.014342  260527 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1122 00:20:24.014400  260527 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1122 00:20:24.014481  260527 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1122 00:20:24.014580  260527 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1122 00:20:24.014656  260527 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1122 00:20:24.014752  260527 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1122 00:20:24.014831  260527 kubeadm.go:319] CGROUPS_IO: enabled
	I1122 00:20:24.014932  260527 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:20:24.015087  260527 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:20:24.015224  260527 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1122 00:20:24.015326  260527 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1122 00:20:24.018013  260527 out.go:252]   - Generating certificates and keys ...
	I1122 00:20:24.018127  260527 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1122 00:20:24.018237  260527 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1122 00:20:24.018376  260527 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 00:20:24.018448  260527 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1122 00:20:24.018509  260527 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1122 00:20:24.018566  260527 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1122 00:20:24.018652  260527 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1122 00:20:24.018800  260527 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-491677 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1122 00:20:24.018874  260527 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1122 00:20:24.019069  260527 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-491677 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1122 00:20:24.019133  260527 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 00:20:24.019192  260527 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1122 00:20:24.019236  260527 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1122 00:20:24.019319  260527 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 00:20:24.019387  260527 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 00:20:24.019472  260527 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1122 00:20:24.019550  260527 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 00:20:24.019653  260527 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1122 00:20:24.019755  260527 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 00:20:24.019900  260527 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 00:20:24.020006  260527 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1122 00:20:24.021383  260527 out.go:252]   - Booting up control plane ...
	I1122 00:20:24.021498  260527 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 00:20:24.021574  260527 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 00:20:24.021685  260527 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 00:20:24.021840  260527 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 00:20:24.022055  260527 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1122 00:20:24.022224  260527 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1122 00:20:24.022409  260527 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 00:20:24.022482  260527 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1122 00:20:24.022688  260527 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1122 00:20:24.022859  260527 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1122 00:20:24.022943  260527 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.48157ms
	I1122 00:20:24.023076  260527 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1122 00:20:24.023215  260527 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1122 00:20:24.023334  260527 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1122 00:20:24.023413  260527 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1122 00:20:24.023496  260527 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.022807436s
	I1122 00:20:24.023563  260527 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.506727027s
	I1122 00:20:24.023625  260527 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501637409s
	I1122 00:20:24.023715  260527 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1122 00:20:24.023826  260527 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1122 00:20:24.023880  260527 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1122 00:20:24.024111  260527 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-491677 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1122 00:20:24.024209  260527 kubeadm.go:319] [bootstrap-token] Using token: zuydkb.uvh9448kov8j9p0k
	I1122 00:20:24.026466  260527 out.go:252]   - Configuring RBAC rules ...
	I1122 00:20:24.026583  260527 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1122 00:20:24.026681  260527 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1122 00:20:24.026862  260527 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1122 00:20:24.027045  260527 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1122 00:20:24.027192  260527 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1122 00:20:24.027307  260527 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1122 00:20:24.027453  260527 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1122 00:20:24.027507  260527 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1122 00:20:24.027586  260527 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1122 00:20:24.027594  260527 kubeadm.go:319] 
	I1122 00:20:24.027679  260527 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1122 00:20:24.027687  260527 kubeadm.go:319] 
	I1122 00:20:24.027780  260527 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1122 00:20:24.027788  260527 kubeadm.go:319] 
	I1122 00:20:24.027832  260527 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1122 00:20:24.028013  260527 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1122 00:20:24.028100  260527 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1122 00:20:24.028108  260527 kubeadm.go:319] 
	I1122 00:20:24.028209  260527 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1122 00:20:24.028222  260527 kubeadm.go:319] 
	I1122 00:20:24.028290  260527 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1122 00:20:24.028300  260527 kubeadm.go:319] 
	I1122 00:20:24.028367  260527 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1122 00:20:24.028476  260527 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1122 00:20:24.028653  260527 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1122 00:20:24.028671  260527 kubeadm.go:319] 
	I1122 00:20:24.028801  260527 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1122 00:20:24.028946  260527 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1122 00:20:24.028964  260527 kubeadm.go:319] 
	I1122 00:20:24.029080  260527 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token zuydkb.uvh9448kov8j9p0k \
	I1122 00:20:24.029247  260527 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2af5fc9ecf777c709212eeb70ba373979920cc452e3ef3a8f29babe0281d5739 \
	I1122 00:20:24.029294  260527 kubeadm.go:319] 	--control-plane 
	I1122 00:20:24.029301  260527 kubeadm.go:319] 
	I1122 00:20:24.029452  260527 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1122 00:20:24.029466  260527 kubeadm.go:319] 
	I1122 00:20:24.029655  260527 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token zuydkb.uvh9448kov8j9p0k \
	I1122 00:20:24.029832  260527 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2af5fc9ecf777c709212eeb70ba373979920cc452e3ef3a8f29babe0281d5739 
	I1122 00:20:24.029849  260527 cni.go:84] Creating CNI manager for ""
	I1122 00:20:24.029857  260527 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:20:24.031762  260527 out.go:179] * Configuring CNI (Container Networking Interface) ...
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	1be7176c234f3       56cc512116c8f       9 seconds ago       Running             busybox                   0                   1564f6b28ec4d       busybox                                     default
	b61337c7649d1       52546a367cc9e       14 seconds ago      Running             coredns                   0                   40840a536016c       coredns-66bc5c9577-9wcct                    kube-system
	a8df28ee53bb6       6e38f40d628db       14 seconds ago      Running             storage-provisioner       0                   a70c94b4c1943       storage-provisioner                         kube-system
	304e6535bf7be       409467f978b4a       25 seconds ago      Running             kindnet-cni               0                   068dfc53e6eb8       kindnet-llcnc                               kube-system
	2b0f0e4e1df6d       fc25172553d79       29 seconds ago      Running             kube-proxy                0                   85fd4cd4e5d99       kube-proxy-685jg                            kube-system
	13c5477f80d07       c80c8dbafe7dd       40 seconds ago      Running             kube-controller-manager   0                   b6ae800cc9296       kube-controller-manager-no-preload-781232   kube-system
	6b02e9e9a0792       7dd6aaa1717ab       40 seconds ago      Running             kube-scheduler            0                   3af2c78e96fc1       kube-scheduler-no-preload-781232            kube-system
	7f1227117afb1       c3994bc696102       40 seconds ago      Running             kube-apiserver            0                   be95c3994ed3e       kube-apiserver-no-preload-781232            kube-system
	190bb0852270a       5f1f5298c888d       40 seconds ago      Running             etcd                      0                   3f1e015b9de63       etcd-no-preload-781232                      kube-system
	
	
	==> containerd <==
	Nov 22 00:20:12 no-preload-781232 containerd[659]: time="2025-11-22T00:20:12.125026462Z" level=info msg="Container b61337c7649d1c8ad6db13120b3d0c9730687561de6dd7c132264eba4d1070be: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:20:12 no-preload-781232 containerd[659]: time="2025-11-22T00:20:12.128227043Z" level=info msg="CreateContainer within sandbox \"a70c94b4c1943564b88b616b626e0c720041932bf4d08a29afacedb7821e49d6\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"a8df28ee53bb60379874726c9a896717f75e12fd13a7316e60ad11da58feca4a\""
	Nov 22 00:20:12 no-preload-781232 containerd[659]: time="2025-11-22T00:20:12.128922999Z" level=info msg="StartContainer for \"a8df28ee53bb60379874726c9a896717f75e12fd13a7316e60ad11da58feca4a\""
	Nov 22 00:20:12 no-preload-781232 containerd[659]: time="2025-11-22T00:20:12.130977320Z" level=info msg="connecting to shim a8df28ee53bb60379874726c9a896717f75e12fd13a7316e60ad11da58feca4a" address="unix:///run/containerd/s/a41072d8e56c0c4fd852fc058c033ea42aa1d30a23fb4a4e2d21bc0cf055ef17" protocol=ttrpc version=3
	Nov 22 00:20:12 no-preload-781232 containerd[659]: time="2025-11-22T00:20:12.135197186Z" level=info msg="CreateContainer within sandbox \"40840a536016c7c55af754ac43b03e221f1e60e49a2788ad5f3cf727dfb8737b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b61337c7649d1c8ad6db13120b3d0c9730687561de6dd7c132264eba4d1070be\""
	Nov 22 00:20:12 no-preload-781232 containerd[659]: time="2025-11-22T00:20:12.135926316Z" level=info msg="StartContainer for \"b61337c7649d1c8ad6db13120b3d0c9730687561de6dd7c132264eba4d1070be\""
	Nov 22 00:20:12 no-preload-781232 containerd[659]: time="2025-11-22T00:20:12.137158693Z" level=info msg="connecting to shim b61337c7649d1c8ad6db13120b3d0c9730687561de6dd7c132264eba4d1070be" address="unix:///run/containerd/s/b70595bb46ca14d96b4daefe8d0b2298a7d6dc2f56420769b86ef6dc7df0b4d8" protocol=ttrpc version=3
	Nov 22 00:20:12 no-preload-781232 containerd[659]: time="2025-11-22T00:20:12.191141980Z" level=info msg="StartContainer for \"a8df28ee53bb60379874726c9a896717f75e12fd13a7316e60ad11da58feca4a\" returns successfully"
	Nov 22 00:20:12 no-preload-781232 containerd[659]: time="2025-11-22T00:20:12.196884971Z" level=info msg="StartContainer for \"b61337c7649d1c8ad6db13120b3d0c9730687561de6dd7c132264eba4d1070be\" returns successfully"
	Nov 22 00:20:14 no-preload-781232 containerd[659]: time="2025-11-22T00:20:14.895991422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:c9470f46-fa0e-479c-82bc-857ad36201bf,Namespace:default,Attempt:0,}"
	Nov 22 00:20:14 no-preload-781232 containerd[659]: time="2025-11-22T00:20:14.941640673Z" level=info msg="connecting to shim 1564f6b28ec4dff922fb583de118d245a6ac03f32306a9cc980e0038aecbf0a8" address="unix:///run/containerd/s/b2df10ad6ace202b32f7a35c18d5e2dd63a4edfdd3c65601dfc1d680d40dd139" namespace=k8s.io protocol=ttrpc version=3
	Nov 22 00:20:15 no-preload-781232 containerd[659]: time="2025-11-22T00:20:15.018828312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:c9470f46-fa0e-479c-82bc-857ad36201bf,Namespace:default,Attempt:0,} returns sandbox id \"1564f6b28ec4dff922fb583de118d245a6ac03f32306a9cc980e0038aecbf0a8\""
	Nov 22 00:20:15 no-preload-781232 containerd[659]: time="2025-11-22T00:20:15.021209974Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 22 00:20:17 no-preload-781232 containerd[659]: time="2025-11-22T00:20:17.219026390Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:20:17 no-preload-781232 containerd[659]: time="2025-11-22T00:20:17.219980963Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396644"
	Nov 22 00:20:17 no-preload-781232 containerd[659]: time="2025-11-22T00:20:17.221627420Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:20:17 no-preload-781232 containerd[659]: time="2025-11-22T00:20:17.224055902Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:20:17 no-preload-781232 containerd[659]: time="2025-11-22T00:20:17.224595592Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.203346709s"
	Nov 22 00:20:17 no-preload-781232 containerd[659]: time="2025-11-22T00:20:17.224633234Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 22 00:20:17 no-preload-781232 containerd[659]: time="2025-11-22T00:20:17.230110579Z" level=info msg="CreateContainer within sandbox \"1564f6b28ec4dff922fb583de118d245a6ac03f32306a9cc980e0038aecbf0a8\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 22 00:20:17 no-preload-781232 containerd[659]: time="2025-11-22T00:20:17.240493581Z" level=info msg="Container 1be7176c234f3a80674fd6f9b54181ed294ceb48ab785db551e1cf298de28067: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:20:17 no-preload-781232 containerd[659]: time="2025-11-22T00:20:17.247404428Z" level=info msg="CreateContainer within sandbox \"1564f6b28ec4dff922fb583de118d245a6ac03f32306a9cc980e0038aecbf0a8\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"1be7176c234f3a80674fd6f9b54181ed294ceb48ab785db551e1cf298de28067\""
	Nov 22 00:20:17 no-preload-781232 containerd[659]: time="2025-11-22T00:20:17.248085899Z" level=info msg="StartContainer for \"1be7176c234f3a80674fd6f9b54181ed294ceb48ab785db551e1cf298de28067\""
	Nov 22 00:20:17 no-preload-781232 containerd[659]: time="2025-11-22T00:20:17.249115545Z" level=info msg="connecting to shim 1be7176c234f3a80674fd6f9b54181ed294ceb48ab785db551e1cf298de28067" address="unix:///run/containerd/s/b2df10ad6ace202b32f7a35c18d5e2dd63a4edfdd3c65601dfc1d680d40dd139" protocol=ttrpc version=3
	Nov 22 00:20:17 no-preload-781232 containerd[659]: time="2025-11-22T00:20:17.315789801Z" level=info msg="StartContainer for \"1be7176c234f3a80674fd6f9b54181ed294ceb48ab785db551e1cf298de28067\" returns successfully"
	
	
	==> coredns [b61337c7649d1c8ad6db13120b3d0c9730687561de6dd7c132264eba4d1070be] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48969 - 46525 "HINFO IN 6647442209668263628.3620737544070114. udp 54 false 512" NXDOMAIN qr,rd,ra 129 0.024232555s
	
	
	==> describe nodes <==
	Name:               no-preload-781232
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-781232
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=no-preload-781232
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_19_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:19:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-781232
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:20:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:20:21 +0000   Sat, 22 Nov 2025 00:19:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:20:21 +0000   Sat, 22 Nov 2025 00:19:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:20:21 +0000   Sat, 22 Nov 2025 00:19:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:20:21 +0000   Sat, 22 Nov 2025 00:20:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-781232
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 5665009e93b91d39dc05718b691e3875
	  System UUID:                34f9a952-9825-419d-98a4-5c9d048a8949
	  Boot ID:                    725aae03-f893-4e0b-b029-cbd3b00ccfdd
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-9wcct                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-no-preload-781232                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-llcnc                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-no-preload-781232             250m (3%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-no-preload-781232    200m (2%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-685jg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-no-preload-781232             100m (1%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 28s   kube-proxy       
	  Normal  Starting                 35s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  35s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  35s   kubelet          Node no-preload-781232 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s   kubelet          Node no-preload-781232 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s   kubelet          Node no-preload-781232 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s   node-controller  Node no-preload-781232 event: Registered Node no-preload-781232 in Controller
	  Normal  NodeReady                15s   kubelet          Node no-preload-781232 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov21 23:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000865] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.410276] i8042: Warning: Keylock active
	[  +0.014947] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.495836] block sda: the capability attribute has been deprecated.
	[  +0.091740] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024333] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.452540] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [190bb0852270abcf17fda286c6be5e9fcb36eb2b98dcf07cf71fa2985c5db26b] <==
	{"level":"warn","ts":"2025-11-22T00:19:48.020496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.029344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.036578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.044633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.052341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.059700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.067305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.075178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.083428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.091252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.098111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.105126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.115320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.122949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.130869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.138077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.153643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.158220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.165369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.172568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.187849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.195427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.203497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:19:48.251107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:06.065444Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.807983ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766331608303053 > lease_revoke:<id:5b339aa8ee6283fb>","response":"size:29"}
	
	
	==> kernel <==
	 00:20:26 up  1:02,  0 user,  load average: 5.71, 3.68, 2.28
	Linux no-preload-781232 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [304e6535bf7bedf2a516b8d232b19d3e038abaca4c8c450355eade98b387f580] <==
	I1122 00:20:01.282827       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:20:01.283115       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1122 00:20:01.285363       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:20:01.285391       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:20:01.285415       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:20:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:20:01.579338       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:20:01.579365       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:20:01.579401       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:20:01.579930       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:20:02.079584       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:20:02.079629       1 metrics.go:72] Registering metrics
	I1122 00:20:02.079738       1 controller.go:711] "Syncing nftables rules"
	I1122 00:20:11.580110       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1122 00:20:11.580197       1 main.go:301] handling current node
	I1122 00:20:21.579658       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1122 00:20:21.579692       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7f1227117afb11933863eec6c929a38cd5f7c89c181f267ac92151e7d68ac0bb] <==
	E1122 00:19:48.903533       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1122 00:19:48.948724       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:19:48.966774       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:19:48.966780       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1122 00:19:48.973236       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:19:48.974951       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:19:49.082403       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:19:49.751231       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1122 00:19:49.755074       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1122 00:19:49.755097       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:19:50.385719       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:19:50.430829       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:19:50.558323       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1122 00:19:50.566618       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1122 00:19:50.567866       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:19:50.572538       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:19:50.989615       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:19:51.569130       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:19:51.579846       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1122 00:19:51.587753       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:19:56.392066       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:19:56.397169       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:19:57.040752       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:19:57.089341       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1122 00:20:23.692415       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:44986: use of closed network connection
	
	
	==> kube-controller-manager [13c5477f80d07937f3038c381810143f379c1a5724ad58b9f212e7d95e199ef6] <==
	I1122 00:19:55.943773       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:19:55.948143       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-781232" podCIDRs=["10.244.0.0/24"]
	I1122 00:19:55.951028       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1122 00:19:55.952216       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1122 00:19:55.958685       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1122 00:19:55.967211       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1122 00:19:55.969502       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1122 00:19:55.985506       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 00:19:55.987778       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1122 00:19:55.987818       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1122 00:19:55.987843       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:19:55.987860       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1122 00:19:55.987891       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:19:55.987892       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1122 00:19:55.988056       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1122 00:19:55.988157       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:19:55.988196       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1122 00:19:55.988451       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1122 00:19:55.988570       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1122 00:19:55.993493       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1122 00:19:55.995762       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:19:55.999041       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1122 00:19:56.003456       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1122 00:19:56.009732       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:20:15.940020       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [2b0f0e4e1df6d003c1fd5d63a2d88caf527a5828be1e719b714f70bf70e013e6] <==
	I1122 00:19:57.745181       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:19:57.820374       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:19:57.920741       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:19:57.920805       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1122 00:19:57.920908       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:19:57.944005       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:19:57.944068       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:19:57.949691       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:19:57.950216       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:19:57.950247       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:19:57.951713       1 config.go:200] "Starting service config controller"
	I1122 00:19:57.951744       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:19:57.952068       1 config.go:309] "Starting node config controller"
	I1122 00:19:57.952079       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:19:57.952087       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:19:57.952127       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:19:57.952133       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:19:57.952152       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:19:57.952157       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:19:58.052730       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:19:58.052758       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1122 00:19:58.052792       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [6b02e9e9a07928c42cf1e5bb58d45de4ce420454640d91b3f098f98aa2f59ca6] <==
	E1122 00:19:49.252868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1122 00:19:49.252981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:19:49.253033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:19:49.253095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1122 00:19:49.253096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:19:49.253177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:19:49.253195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1122 00:19:49.253304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:19:49.253822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:19:49.254123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:19:49.254316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:19:49.254409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:19:49.254603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:19:49.255138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:19:49.255326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:19:49.255451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:19:49.255463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:19:49.255487       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:19:49.255552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 00:19:50.077997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:19:50.105397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:19:50.128752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1122 00:19:50.191530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:19:50.320610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1122 00:19:52.548275       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:19:52 no-preload-781232 kubelet[2185]: I1122 00:19:52.497026    2185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-781232" podStartSLOduration=1.4970037889999999 podStartE2EDuration="1.497003789s" podCreationTimestamp="2025-11-22 00:19:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:19:52.485725647 +0000 UTC m=+1.155699844" watchObservedRunningTime="2025-11-22 00:19:52.497003789 +0000 UTC m=+1.166977980"
	Nov 22 00:19:52 no-preload-781232 kubelet[2185]: I1122 00:19:52.507726    2185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-781232" podStartSLOduration=1.5077082979999998 podStartE2EDuration="1.507708298s" podCreationTimestamp="2025-11-22 00:19:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:19:52.497358614 +0000 UTC m=+1.167332813" watchObservedRunningTime="2025-11-22 00:19:52.507708298 +0000 UTC m=+1.177682496"
	Nov 22 00:19:52 no-preload-781232 kubelet[2185]: I1122 00:19:52.524221    2185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-781232" podStartSLOduration=1.524201804 podStartE2EDuration="1.524201804s" podCreationTimestamp="2025-11-22 00:19:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:19:52.523919549 +0000 UTC m=+1.193893746" watchObservedRunningTime="2025-11-22 00:19:52.524201804 +0000 UTC m=+1.194176001"
	Nov 22 00:19:52 no-preload-781232 kubelet[2185]: I1122 00:19:52.524428    2185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-781232" podStartSLOduration=2.5244149670000002 podStartE2EDuration="2.524414967s" podCreationTimestamp="2025-11-22 00:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:19:52.508124285 +0000 UTC m=+1.178098482" watchObservedRunningTime="2025-11-22 00:19:52.524414967 +0000 UTC m=+1.194389144"
	Nov 22 00:19:55 no-preload-781232 kubelet[2185]: I1122 00:19:55.977925    2185 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 22 00:19:55 no-preload-781232 kubelet[2185]: I1122 00:19:55.978713    2185 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 22 00:19:57 no-preload-781232 kubelet[2185]: I1122 00:19:57.141500    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33a2d2c1-e364-4ec8-a9a0-69ba9146625f-xtables-lock\") pod \"kube-proxy-685jg\" (UID: \"33a2d2c1-e364-4ec8-a9a0-69ba9146625f\") " pod="kube-system/kube-proxy-685jg"
	Nov 22 00:19:57 no-preload-781232 kubelet[2185]: I1122 00:19:57.141537    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33a2d2c1-e364-4ec8-a9a0-69ba9146625f-lib-modules\") pod \"kube-proxy-685jg\" (UID: \"33a2d2c1-e364-4ec8-a9a0-69ba9146625f\") " pod="kube-system/kube-proxy-685jg"
	Nov 22 00:19:57 no-preload-781232 kubelet[2185]: I1122 00:19:57.141556    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw28b\" (UniqueName: \"kubernetes.io/projected/33a2d2c1-e364-4ec8-a9a0-69ba9146625f-kube-api-access-zw28b\") pod \"kube-proxy-685jg\" (UID: \"33a2d2c1-e364-4ec8-a9a0-69ba9146625f\") " pod="kube-system/kube-proxy-685jg"
	Nov 22 00:19:57 no-preload-781232 kubelet[2185]: I1122 00:19:57.141576    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fcdd9f25-4804-47c2-8f09-b6a2d688a8bc-xtables-lock\") pod \"kindnet-llcnc\" (UID: \"fcdd9f25-4804-47c2-8f09-b6a2d688a8bc\") " pod="kube-system/kindnet-llcnc"
	Nov 22 00:19:57 no-preload-781232 kubelet[2185]: I1122 00:19:57.141635    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fcdd9f25-4804-47c2-8f09-b6a2d688a8bc-lib-modules\") pod \"kindnet-llcnc\" (UID: \"fcdd9f25-4804-47c2-8f09-b6a2d688a8bc\") " pod="kube-system/kindnet-llcnc"
	Nov 22 00:19:57 no-preload-781232 kubelet[2185]: I1122 00:19:57.141684    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgjjc\" (UniqueName: \"kubernetes.io/projected/fcdd9f25-4804-47c2-8f09-b6a2d688a8bc-kube-api-access-tgjjc\") pod \"kindnet-llcnc\" (UID: \"fcdd9f25-4804-47c2-8f09-b6a2d688a8bc\") " pod="kube-system/kindnet-llcnc"
	Nov 22 00:19:57 no-preload-781232 kubelet[2185]: I1122 00:19:57.141740    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/fcdd9f25-4804-47c2-8f09-b6a2d688a8bc-cni-cfg\") pod \"kindnet-llcnc\" (UID: \"fcdd9f25-4804-47c2-8f09-b6a2d688a8bc\") " pod="kube-system/kindnet-llcnc"
	Nov 22 00:19:57 no-preload-781232 kubelet[2185]: I1122 00:19:57.141773    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/33a2d2c1-e364-4ec8-a9a0-69ba9146625f-kube-proxy\") pod \"kube-proxy-685jg\" (UID: \"33a2d2c1-e364-4ec8-a9a0-69ba9146625f\") " pod="kube-system/kube-proxy-685jg"
	Nov 22 00:19:58 no-preload-781232 kubelet[2185]: I1122 00:19:58.475239    2185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-685jg" podStartSLOduration=1.475209261 podStartE2EDuration="1.475209261s" podCreationTimestamp="2025-11-22 00:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:19:58.474998555 +0000 UTC m=+7.144972752" watchObservedRunningTime="2025-11-22 00:19:58.475209261 +0000 UTC m=+7.145183457"
	Nov 22 00:20:01 no-preload-781232 kubelet[2185]: I1122 00:20:01.484255    2185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-llcnc" podStartSLOduration=1.318787825 podStartE2EDuration="4.484237697s" podCreationTimestamp="2025-11-22 00:19:57 +0000 UTC" firstStartedPulling="2025-11-22 00:19:57.782479973 +0000 UTC m=+6.452454162" lastFinishedPulling="2025-11-22 00:20:00.947929854 +0000 UTC m=+9.617904034" observedRunningTime="2025-11-22 00:20:01.484049069 +0000 UTC m=+10.154023264" watchObservedRunningTime="2025-11-22 00:20:01.484237697 +0000 UTC m=+10.154211885"
	Nov 22 00:20:11 no-preload-781232 kubelet[2185]: I1122 00:20:11.652649    2185 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 22 00:20:11 no-preload-781232 kubelet[2185]: I1122 00:20:11.739640    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/904bdf70-7728-45c5-a9ae-487aed28e6fc-tmp\") pod \"storage-provisioner\" (UID: \"904bdf70-7728-45c5-a9ae-487aed28e6fc\") " pod="kube-system/storage-provisioner"
	Nov 22 00:20:11 no-preload-781232 kubelet[2185]: I1122 00:20:11.739695    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjxz7\" (UniqueName: \"kubernetes.io/projected/904bdf70-7728-45c5-a9ae-487aed28e6fc-kube-api-access-xjxz7\") pod \"storage-provisioner\" (UID: \"904bdf70-7728-45c5-a9ae-487aed28e6fc\") " pod="kube-system/storage-provisioner"
	Nov 22 00:20:11 no-preload-781232 kubelet[2185]: I1122 00:20:11.739725    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67b97cc5-016b-44d1-8119-dd6aa4932f83-config-volume\") pod \"coredns-66bc5c9577-9wcct\" (UID: \"67b97cc5-016b-44d1-8119-dd6aa4932f83\") " pod="kube-system/coredns-66bc5c9577-9wcct"
	Nov 22 00:20:11 no-preload-781232 kubelet[2185]: I1122 00:20:11.739751    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkgkd\" (UniqueName: \"kubernetes.io/projected/67b97cc5-016b-44d1-8119-dd6aa4932f83-kube-api-access-tkgkd\") pod \"coredns-66bc5c9577-9wcct\" (UID: \"67b97cc5-016b-44d1-8119-dd6aa4932f83\") " pod="kube-system/coredns-66bc5c9577-9wcct"
	Nov 22 00:20:12 no-preload-781232 kubelet[2185]: I1122 00:20:12.528668    2185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9wcct" podStartSLOduration=15.528640775 podStartE2EDuration="15.528640775s" podCreationTimestamp="2025-11-22 00:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:20:12.513567573 +0000 UTC m=+21.183541794" watchObservedRunningTime="2025-11-22 00:20:12.528640775 +0000 UTC m=+21.198614973"
	Nov 22 00:20:14 no-preload-781232 kubelet[2185]: I1122 00:20:14.582118    2185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=17.582090447 podStartE2EDuration="17.582090447s" podCreationTimestamp="2025-11-22 00:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:20:12.544916262 +0000 UTC m=+21.214890459" watchObservedRunningTime="2025-11-22 00:20:14.582090447 +0000 UTC m=+23.252064644"
	Nov 22 00:20:14 no-preload-781232 kubelet[2185]: I1122 00:20:14.657660    2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcdq4\" (UniqueName: \"kubernetes.io/projected/c9470f46-fa0e-479c-82bc-857ad36201bf-kube-api-access-tcdq4\") pod \"busybox\" (UID: \"c9470f46-fa0e-479c-82bc-857ad36201bf\") " pod="default/busybox"
	Nov 22 00:20:17 no-preload-781232 kubelet[2185]: I1122 00:20:17.529121    2185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.3241123 podStartE2EDuration="3.529088108s" podCreationTimestamp="2025-11-22 00:20:14 +0000 UTC" firstStartedPulling="2025-11-22 00:20:15.020606103 +0000 UTC m=+23.690580283" lastFinishedPulling="2025-11-22 00:20:17.225581913 +0000 UTC m=+25.895556091" observedRunningTime="2025-11-22 00:20:17.528843748 +0000 UTC m=+26.198817946" watchObservedRunningTime="2025-11-22 00:20:17.529088108 +0000 UTC m=+26.199062305"
	
	
	==> storage-provisioner [a8df28ee53bb60379874726c9a896717f75e12fd13a7316e60ad11da58feca4a] <==
	I1122 00:20:12.203980       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:20:12.215498       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:20:12.215555       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1122 00:20:12.218724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:12.226119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:20:12.226503       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:20:12.226810       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"663f7033-f1d0-4a6a-a7b5-6ae68ff1b408", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-781232_34fa41f9-0564-4cf9-a793-d3e8600ab02c became leader
	I1122 00:20:12.226865       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-781232_34fa41f9-0564-4cf9-a793-d3e8600ab02c!
	W1122 00:20:12.232654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:12.239832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:20:12.327083       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-781232_34fa41f9-0564-4cf9-a793-d3e8600ab02c!
	W1122 00:20:14.243309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:14.248432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:16.251969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:16.256425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:18.260452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:18.266363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:20.270417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:20.274881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:22.278341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:22.283874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:24.291521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:24.300020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:26.303204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:26.308844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-781232 -n no-preload-781232
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-781232 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (13.15s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (13.66s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-491677 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7f94d7ba-76b7-4739-b7a9-81d27936e10f] Pending
helpers_test.go:352: "busybox" [7f94d7ba-76b7-4739-b7a9-81d27936e10f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7f94d7ba-76b7-4739-b7a9-81d27936e10f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004237775s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-491677 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
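The failing assertion can be re-run by hand. This is a minimal sketch that simply mirrors the test's own kubectl invocation, assuming the busybox pod created from testdata/busybox.yaml is still running in the default namespace of this profile; the hard-limit variant is an extra illustrative check, not part of the test:

	# Soft open-file limit inside the pod (the value the test asserts on; expected 1048576, observed 1024 above)
	kubectl --context embed-certs-491677 exec busybox -- /bin/sh -c "ulimit -n"
	# Hard limit for comparison; assumes busybox ash accepts the -H flag, which it normally does
	kubectl --context embed-certs-491677 exec busybox -- /bin/sh -c "ulimit -H -n"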
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-491677
helpers_test.go:243: (dbg) docker inspect embed-certs-491677:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bf732b8e13b6e65820e5672638180635c1c71c51b5044b3c2ddaf571c423ad78",
	        "Created": "2025-11-22T00:20:06.79977262Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 261687,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:20:06.837081251Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e5906b22e872a17998ae88aee6d850484e7a99144e0db6afcf2c44a53e6042d4",
	        "ResolvConfPath": "/var/lib/docker/containers/bf732b8e13b6e65820e5672638180635c1c71c51b5044b3c2ddaf571c423ad78/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bf732b8e13b6e65820e5672638180635c1c71c51b5044b3c2ddaf571c423ad78/hostname",
	        "HostsPath": "/var/lib/docker/containers/bf732b8e13b6e65820e5672638180635c1c71c51b5044b3c2ddaf571c423ad78/hosts",
	        "LogPath": "/var/lib/docker/containers/bf732b8e13b6e65820e5672638180635c1c71c51b5044b3c2ddaf571c423ad78/bf732b8e13b6e65820e5672638180635c1c71c51b5044b3c2ddaf571c423ad78-json.log",
	        "Name": "/embed-certs-491677",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-491677:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-491677",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bf732b8e13b6e65820e5672638180635c1c71c51b5044b3c2ddaf571c423ad78",
	                "LowerDir": "/var/lib/docker/overlay2/100dc12db05615eaf06dacd731e94d7443e9ef5d109aa3bd6714f1cc7c88f05c-init/diff:/var/lib/docker/overlay2/4b4af9a4e857911a6b5096aeeaee227ee7577c6eff3b08bbb4e765c49ed2fb70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/100dc12db05615eaf06dacd731e94d7443e9ef5d109aa3bd6714f1cc7c88f05c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/100dc12db05615eaf06dacd731e94d7443e9ef5d109aa3bd6714f1cc7c88f05c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/100dc12db05615eaf06dacd731e94d7443e9ef5d109aa3bd6714f1cc7c88f05c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-491677",
	                "Source": "/var/lib/docker/volumes/embed-certs-491677/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-491677",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-491677",
	                "name.minikube.sigs.k8s.io": "embed-certs-491677",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8a0e6de74db17b415e812d0739f1b3e2c5f7b9c165b269bc900dac10a1423d9b",
	            "SandboxKey": "/var/run/docker/netns/8a0e6de74db1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-491677": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "46fbcd0cae5a2d811f266bf4a0cbb02e2351cfcabdc238fccca0b8241b80909e",
	                    "EndpointID": "00c565579fa87c26656a136134b70f19215cff99bd9340a5c80f45cd5c120af9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "7e:b6:61:a8:ec:b7",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-491677",
	                        "bf732b8e13b6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
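As a side note on where the same open-file limit could be inspected at the container level, here is a small illustrative sketch (not something the test runs): the field path .HostConfig.Ulimits matches the JSON shown above (which reports "Ulimits": []), while the --format template and the docker exec probe are assumptions added for illustration only:

	# Ulimits configured on the kic container, per the inspect output above
	docker inspect -f '{{ json .HostConfig.Ulimits }}' embed-certs-491677
	# Open-file limit as seen by a shell inside the container, for comparison
	docker exec embed-certs-491677 sh -c 'ulimit -n'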
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-491677 -n embed-certs-491677
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-491677 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-491677 logs -n 25: (1.265271339s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cilium-687868 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo containerd config dump                                                                                                                                                                                                        │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo crio config                                                                                                                                                                                                                   │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ delete  │ -p cilium-687868                                                                                                                                                                                                                                    │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ start   │ -p old-k8s-version-462319 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-462319 │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:20 UTC │
	│ ssh     │ -p NoKubernetes-714059 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ start   │ -p cert-expiration-427330 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-427330 │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ delete  │ -p cert-expiration-427330                                                                                                                                                                                                                           │ cert-expiration-427330 │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ start   │ -p no-preload-781232 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-781232      │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:20 UTC │
	│ stop    │ -p NoKubernetes-714059                                                                                                                                                                                                                              │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ start   │ -p NoKubernetes-714059 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ ssh     │ -p NoKubernetes-714059 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ delete  │ -p NoKubernetes-714059                                                                                                                                                                                                                              │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:20 UTC │
	│ start   │ -p embed-certs-491677 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-491677     │ jenkins │ v1.37.0 │ 22 Nov 25 00:20 UTC │ 22 Nov 25 00:20 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-462319 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-462319 │ jenkins │ v1.37.0 │ 22 Nov 25 00:20 UTC │ 22 Nov 25 00:20 UTC │
	│ stop    │ -p old-k8s-version-462319 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-462319 │ jenkins │ v1.37.0 │ 22 Nov 25 00:20 UTC │ 22 Nov 25 00:20 UTC │
	│ addons  │ enable metrics-server -p no-preload-781232 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-781232      │ jenkins │ v1.37.0 │ 22 Nov 25 00:20 UTC │ 22 Nov 25 00:20 UTC │
	│ stop    │ -p no-preload-781232 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-781232      │ jenkins │ v1.37.0 │ 22 Nov 25 00:20 UTC │ 22 Nov 25 00:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-462319 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-462319 │ jenkins │ v1.37.0 │ 22 Nov 25 00:20 UTC │ 22 Nov 25 00:20 UTC │
	│ start   │ -p old-k8s-version-462319 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-462319 │ jenkins │ v1.37.0 │ 22 Nov 25 00:20 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-781232 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-781232      │ jenkins │ v1.37.0 │ 22 Nov 25 00:20 UTC │ 22 Nov 25 00:20 UTC │
	│ start   │ -p no-preload-781232 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-781232      │ jenkins │ v1.37.0 │ 22 Nov 25 00:20 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:20:40
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:20:40.886405  271651 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:20:40.886750  271651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:20:40.886763  271651 out.go:374] Setting ErrFile to fd 2...
	I1122 00:20:40.886771  271651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:20:40.887090  271651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9059/.minikube/bin
	I1122 00:20:40.887734  271651 out.go:368] Setting JSON to false
	I1122 00:20:40.889530  271651 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3780,"bootTime":1763767061,"procs":390,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:20:40.889615  271651 start.go:143] virtualization: kvm guest
	I1122 00:20:40.891913  271651 out.go:179] * [no-preload-781232] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:20:40.893519  271651 notify.go:221] Checking for updates...
	I1122 00:20:40.893538  271651 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:20:40.895181  271651 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:20:40.896644  271651 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9059/kubeconfig
	I1122 00:20:40.898014  271651 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9059/.minikube
	I1122 00:20:40.899285  271651 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:20:40.900518  271651 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:20:40.902454  271651 config.go:182] Loaded profile config "no-preload-781232": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:20:40.903040  271651 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:20:40.929978  271651 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1122 00:20:40.930114  271651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:20:41.006356  271651 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-22 00:20:40.993444426 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:20:41.006474  271651 docker.go:319] overlay module found
	I1122 00:20:41.009472  271651 out.go:179] * Using the docker driver based on existing profile
	I1122 00:20:41.010942  271651 start.go:309] selected driver: docker
	I1122 00:20:41.010966  271651 start.go:930] validating driver "docker" against &{Name:no-preload-781232 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-781232 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:20:41.011087  271651 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:20:41.011879  271651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:20:41.104934  271651 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-22 00:20:41.088985212 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:20:41.105442  271651 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:20:41.105488  271651 cni.go:84] Creating CNI manager for ""
	I1122 00:20:41.105564  271651 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:20:41.105648  271651 start.go:353] cluster config:
	{Name:no-preload-781232 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-781232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:20:41.112421  271651 out.go:179] * Starting "no-preload-781232" primary control-plane node in "no-preload-781232" cluster
	I1122 00:20:41.113936  271651 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1122 00:20:41.115178  271651 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:20:41.116381  271651 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:20:41.116485  271651 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:20:41.116551  271651 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232/config.json ...
	I1122 00:20:41.116678  271651 cache.go:107] acquiring lock: {Name:mk3cbf993e64f2a4d1538596c5feef81911b9052 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:20:41.116792  271651 cache.go:115] /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1122 00:20:41.116831  271651 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 170.313µs
	I1122 00:20:41.116835  271651 cache.go:107] acquiring lock: {Name:mkfebe1efa2de813c1c2eb3f37a54c832bf78fd7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:20:41.116887  271651 cache.go:107] acquiring lock: {Name:mk81179b55eac91a1d7e3a877c3f0b2f7481bd05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:20:41.116835  271651 cache.go:107] acquiring lock: {Name:mkeac22ae63d56187c9ebc31aef7cb1b078e1fb2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:20:41.118014  271651 cache.go:115] /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1122 00:20:41.118032  271651 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 1.144897ms
	I1122 00:20:41.118044  271651 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1122 00:20:41.116861  271651 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1122 00:20:41.116762  271651 cache.go:107] acquiring lock: {Name:mk69f6487a5dd2c7727468b62e1b8af4d70135bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:20:41.116864  271651 cache.go:107] acquiring lock: {Name:mk11527980a4bb905a3cb94827e56e2e74bc7fc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:20:41.118088  271651 cache.go:115] /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1122 00:20:41.118087  271651 cache.go:115] /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1122 00:20:41.118095  271651 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.233265ms
	I1122 00:20:41.116876  271651 cache.go:107] acquiring lock: {Name:mkca7af66c9bd0c8ceb77c9b6b55063268e48694 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:20:41.118116  271651 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1122 00:20:41.116921  271651 cache.go:107] acquiring lock: {Name:mk10fafdbb0634440e6c1d6dcf0e044001fbcbea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:20:41.118119  271651 cache.go:115] /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1122 00:20:41.118129  271651 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 1.302167ms
	I1122 00:20:41.118139  271651 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1122 00:20:41.118103  271651 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.427788ms
	I1122 00:20:41.118159  271651 cache.go:115] /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1122 00:20:41.118174  271651 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1122 00:20:41.118149  271651 cache.go:115] /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1122 00:20:41.118187  271651 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.267947ms
	I1122 00:20:41.118187  271651 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 1.296406ms
	I1122 00:20:41.116979  271651 cache.go:115] /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1122 00:20:41.118201  271651 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1122 00:20:41.118196  271651 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1122 00:20:41.118204  271651 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.373517ms
	I1122 00:20:41.118213  271651 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1122 00:20:41.118222  271651 cache.go:87] Successfully saved all images to host disk.
	I1122 00:20:41.145996  271651 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:20:41.146026  271651 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:20:41.146042  271651 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:20:41.146078  271651 start.go:360] acquireMachinesLock for no-preload-781232: {Name:mkbfbcb44f7f9e1c764fa85467f8afec16e3b56f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:20:41.146137  271651 start.go:364] duration metric: took 40.701µs to acquireMachinesLock for "no-preload-781232"
	I1122 00:20:41.146156  271651 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:20:41.146163  271651 fix.go:54] fixHost starting: 
	I1122 00:20:41.146490  271651 cli_runner.go:164] Run: docker container inspect no-preload-781232 --format={{.State.Status}}
	I1122 00:20:41.172390  271651 fix.go:112] recreateIfNeeded on no-preload-781232: state=Stopped err=<nil>
	W1122 00:20:41.172431  271651 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 00:20:38.063632  269458 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1122 00:20:38.063720  269458 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1122 00:20:38.063790  269458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-462319
	I1122 00:20:38.067657  269458 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1122 00:20:38.067693  269458 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1122 00:20:38.067761  269458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-462319
	I1122 00:20:38.094915  269458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/old-k8s-version-462319/id_rsa Username:docker}
	I1122 00:20:38.096830  269458 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:20:38.096852  269458 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:20:38.096909  269458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-462319
	I1122 00:20:38.103367  269458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/old-k8s-version-462319/id_rsa Username:docker}
	I1122 00:20:38.108420  269458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/old-k8s-version-462319/id_rsa Username:docker}
	I1122 00:20:38.125718  269458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/old-k8s-version-462319/id_rsa Username:docker}
	I1122 00:20:38.206740  269458 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:20:38.218490  269458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:20:38.224473  269458 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-462319" to be "Ready" ...
	I1122 00:20:38.234022  269458 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1122 00:20:38.234050  269458 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1122 00:20:38.235622  269458 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1122 00:20:38.235645  269458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1122 00:20:38.244718  269458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:20:38.254596  269458 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1122 00:20:38.254626  269458 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1122 00:20:38.255591  269458 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1122 00:20:38.255618  269458 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1122 00:20:38.272579  269458 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1122 00:20:38.272618  269458 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1122 00:20:38.276352  269458 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1122 00:20:38.276375  269458 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1122 00:20:38.294975  269458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1122 00:20:38.300369  269458 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1122 00:20:38.300394  269458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1122 00:20:38.323115  269458 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1122 00:20:38.323154  269458 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1122 00:20:38.346754  269458 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1122 00:20:38.346784  269458 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1122 00:20:38.365277  269458 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1122 00:20:38.365303  269458 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1122 00:20:38.385806  269458 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1122 00:20:38.385833  269458 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1122 00:20:38.404309  269458 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1122 00:20:38.404343  269458 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1122 00:20:38.417900  269458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1122 00:20:40.488636  269458 node_ready.go:49] node "old-k8s-version-462319" is "Ready"
	I1122 00:20:40.488671  269458 node_ready.go:38] duration metric: took 2.264006021s for node "old-k8s-version-462319" to be "Ready" ...
	I1122 00:20:40.488689  269458 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:20:40.488748  269458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1122 00:20:37.882714  260527 node_ready.go:57] node "embed-certs-491677" has "Ready":"False" status (will retry)
	I1122 00:20:40.382005  260527 node_ready.go:49] node "embed-certs-491677" is "Ready"
	I1122 00:20:40.382048  260527 node_ready.go:38] duration metric: took 11.503562001s for node "embed-certs-491677" to be "Ready" ...
	I1122 00:20:40.382069  260527 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:20:40.382127  260527 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:20:40.401330  260527 api_server.go:72] duration metric: took 11.90660159s to wait for apiserver process to appear ...
	I1122 00:20:40.401364  260527 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:20:40.401388  260527 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1122 00:20:40.406625  260527 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1122 00:20:40.408004  260527 api_server.go:141] control plane version: v1.34.1
	I1122 00:20:40.408039  260527 api_server.go:131] duration metric: took 6.665705ms to wait for apiserver health ...
	I1122 00:20:40.408050  260527 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:20:40.412838  260527 system_pods.go:59] 8 kube-system pods found
	I1122 00:20:40.412972  260527 system_pods.go:61] "coredns-66bc5c9577-k2k88" [5170ce81-2d67-4775-9d3e-7ba7d5b37f03] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:40.412999  260527 system_pods.go:61] "etcd-embed-certs-491677" [c1a339bc-3e3b-4a29-b843-3bddd31ce5d7] Running
	I1122 00:20:40.413012  260527 system_pods.go:61] "kindnet-hv86p" [6231b935-f44b-4e7b-a240-287c22f9547b] Running
	I1122 00:20:40.413033  260527 system_pods.go:61] "kube-apiserver-embed-certs-491677" [b0fe5ce2-fabe-4f5f-87d9-a8775ed9324e] Running
	I1122 00:20:40.413043  260527 system_pods.go:61] "kube-controller-manager-embed-certs-491677" [bbc77c2e-0f6d-4ffa-9d92-8d82a0a96146] Running
	I1122 00:20:40.413049  260527 system_pods.go:61] "kube-proxy-k9lgv" [aa71cc32-b446-45a5-b379-0bb74ac111be] Running
	I1122 00:20:40.413060  260527 system_pods.go:61] "kube-scheduler-embed-certs-491677" [ae065e3d-a671-48ea-8c1e-aa1d1cb0eb3e] Running
	I1122 00:20:40.413076  260527 system_pods.go:61] "storage-provisioner" [957a225b-f96e-47aa-aea3-a77ff5b7843c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:40.413093  260527 system_pods.go:74] duration metric: took 5.035521ms to wait for pod list to return data ...
	I1122 00:20:40.413108  260527 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:20:40.415623  260527 default_sa.go:45] found service account: "default"
	I1122 00:20:40.415650  260527 default_sa.go:55] duration metric: took 2.533838ms for default service account to be created ...
	I1122 00:20:40.415662  260527 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:20:40.419678  260527 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:40.419720  260527 system_pods.go:89] "coredns-66bc5c9577-k2k88" [5170ce81-2d67-4775-9d3e-7ba7d5b37f03] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:40.419728  260527 system_pods.go:89] "etcd-embed-certs-491677" [c1a339bc-3e3b-4a29-b843-3bddd31ce5d7] Running
	I1122 00:20:40.419744  260527 system_pods.go:89] "kindnet-hv86p" [6231b935-f44b-4e7b-a240-287c22f9547b] Running
	I1122 00:20:40.419749  260527 system_pods.go:89] "kube-apiserver-embed-certs-491677" [b0fe5ce2-fabe-4f5f-87d9-a8775ed9324e] Running
	I1122 00:20:40.419754  260527 system_pods.go:89] "kube-controller-manager-embed-certs-491677" [bbc77c2e-0f6d-4ffa-9d92-8d82a0a96146] Running
	I1122 00:20:40.419759  260527 system_pods.go:89] "kube-proxy-k9lgv" [aa71cc32-b446-45a5-b379-0bb74ac111be] Running
	I1122 00:20:40.419765  260527 system_pods.go:89] "kube-scheduler-embed-certs-491677" [ae065e3d-a671-48ea-8c1e-aa1d1cb0eb3e] Running
	I1122 00:20:40.419771  260527 system_pods.go:89] "storage-provisioner" [957a225b-f96e-47aa-aea3-a77ff5b7843c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:40.419798  260527 retry.go:31] will retry after 270.438071ms: missing components: kube-dns
	I1122 00:20:40.695698  260527 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:40.695763  260527 system_pods.go:89] "coredns-66bc5c9577-k2k88" [5170ce81-2d67-4775-9d3e-7ba7d5b37f03] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:40.695773  260527 system_pods.go:89] "etcd-embed-certs-491677" [c1a339bc-3e3b-4a29-b843-3bddd31ce5d7] Running
	I1122 00:20:40.695783  260527 system_pods.go:89] "kindnet-hv86p" [6231b935-f44b-4e7b-a240-287c22f9547b] Running
	I1122 00:20:40.695793  260527 system_pods.go:89] "kube-apiserver-embed-certs-491677" [b0fe5ce2-fabe-4f5f-87d9-a8775ed9324e] Running
	I1122 00:20:40.695799  260527 system_pods.go:89] "kube-controller-manager-embed-certs-491677" [bbc77c2e-0f6d-4ffa-9d92-8d82a0a96146] Running
	I1122 00:20:40.695807  260527 system_pods.go:89] "kube-proxy-k9lgv" [aa71cc32-b446-45a5-b379-0bb74ac111be] Running
	I1122 00:20:40.695812  260527 system_pods.go:89] "kube-scheduler-embed-certs-491677" [ae065e3d-a671-48ea-8c1e-aa1d1cb0eb3e] Running
	I1122 00:20:40.695821  260527 system_pods.go:89] "storage-provisioner" [957a225b-f96e-47aa-aea3-a77ff5b7843c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:40.695838  260527 retry.go:31] will retry after 368.508675ms: missing components: kube-dns
	I1122 00:20:41.074998  260527 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:41.075042  260527 system_pods.go:89] "coredns-66bc5c9577-k2k88" [5170ce81-2d67-4775-9d3e-7ba7d5b37f03] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:41.075190  260527 system_pods.go:89] "etcd-embed-certs-491677" [c1a339bc-3e3b-4a29-b843-3bddd31ce5d7] Running
	I1122 00:20:41.075204  260527 system_pods.go:89] "kindnet-hv86p" [6231b935-f44b-4e7b-a240-287c22f9547b] Running
	I1122 00:20:41.075288  260527 system_pods.go:89] "kube-apiserver-embed-certs-491677" [b0fe5ce2-fabe-4f5f-87d9-a8775ed9324e] Running
	I1122 00:20:41.075298  260527 system_pods.go:89] "kube-controller-manager-embed-certs-491677" [bbc77c2e-0f6d-4ffa-9d92-8d82a0a96146] Running
	I1122 00:20:41.075303  260527 system_pods.go:89] "kube-proxy-k9lgv" [aa71cc32-b446-45a5-b379-0bb74ac111be] Running
	I1122 00:20:41.075317  260527 system_pods.go:89] "kube-scheduler-embed-certs-491677" [ae065e3d-a671-48ea-8c1e-aa1d1cb0eb3e] Running
	I1122 00:20:41.075325  260527 system_pods.go:89] "storage-provisioner" [957a225b-f96e-47aa-aea3-a77ff5b7843c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:41.075473  260527 retry.go:31] will retry after 369.288531ms: missing components: kube-dns
	I1122 00:20:41.454047  260527 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:41.454089  260527 system_pods.go:89] "coredns-66bc5c9577-k2k88" [5170ce81-2d67-4775-9d3e-7ba7d5b37f03] Running
	I1122 00:20:41.454098  260527 system_pods.go:89] "etcd-embed-certs-491677" [c1a339bc-3e3b-4a29-b843-3bddd31ce5d7] Running
	I1122 00:20:41.454104  260527 system_pods.go:89] "kindnet-hv86p" [6231b935-f44b-4e7b-a240-287c22f9547b] Running
	I1122 00:20:41.454110  260527 system_pods.go:89] "kube-apiserver-embed-certs-491677" [b0fe5ce2-fabe-4f5f-87d9-a8775ed9324e] Running
	I1122 00:20:41.454121  260527 system_pods.go:89] "kube-controller-manager-embed-certs-491677" [bbc77c2e-0f6d-4ffa-9d92-8d82a0a96146] Running
	I1122 00:20:41.454127  260527 system_pods.go:89] "kube-proxy-k9lgv" [aa71cc32-b446-45a5-b379-0bb74ac111be] Running
	I1122 00:20:41.454131  260527 system_pods.go:89] "kube-scheduler-embed-certs-491677" [ae065e3d-a671-48ea-8c1e-aa1d1cb0eb3e] Running
	I1122 00:20:41.454136  260527 system_pods.go:89] "storage-provisioner" [957a225b-f96e-47aa-aea3-a77ff5b7843c] Running
	I1122 00:20:41.454148  260527 system_pods.go:126] duration metric: took 1.038478177s to wait for k8s-apps to be running ...
	I1122 00:20:41.454162  260527 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:20:41.454609  260527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:20:41.479978  260527 system_svc.go:56] duration metric: took 25.803347ms WaitForService to wait for kubelet
	I1122 00:20:41.480012  260527 kubeadm.go:587] duration metric: took 12.985287639s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:20:41.480169  260527 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:20:41.483525  260527 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1122 00:20:41.483555  260527 node_conditions.go:123] node cpu capacity is 8
	I1122 00:20:41.483573  260527 node_conditions.go:105] duration metric: took 3.388568ms to run NodePressure ...
	I1122 00:20:41.483589  260527 start.go:242] waiting for startup goroutines ...
	I1122 00:20:41.483598  260527 start.go:247] waiting for cluster config update ...
	I1122 00:20:41.483611  260527 start.go:256] writing updated cluster config ...
	I1122 00:20:41.484070  260527 ssh_runner.go:195] Run: rm -f paused
	I1122 00:20:41.490981  260527 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:20:41.422839  269458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.178083885s)
	I1122 00:20:41.425002  269458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.206471244s)
	I1122 00:20:41.505527  269458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.210507816s)
	I1122 00:20:41.505568  269458 addons.go:495] Verifying addon metrics-server=true in "old-k8s-version-462319"
	I1122 00:20:42.075153  269458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.65720666s)
	I1122 00:20:42.075599  269458 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.586814064s)
	I1122 00:20:42.075634  269458 api_server.go:72] duration metric: took 4.051404168s to wait for apiserver process to appear ...
	I1122 00:20:42.075641  269458 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:20:42.075680  269458 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:20:42.080775  269458 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-462319 addons enable metrics-server
	
	I1122 00:20:42.082895  269458 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1122 00:20:42.085028  269458 api_server.go:141] control plane version: v1.28.0
	I1122 00:20:42.085061  269458 api_server.go:131] duration metric: took 9.41083ms to wait for apiserver health ...
	I1122 00:20:42.085072  269458 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:20:42.085122  269458 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1122 00:20:40.938726  218693 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.062373977s)
	W1122 00:20:40.938788  218693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1122 00:20:40.938803  218693 logs.go:123] Gathering logs for kube-apiserver [81142e7ed280b3e7a5cc59a091de86e66620f3841486d40ef51f3c845bf698f6] ...
	I1122 00:20:40.938819  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 81142e7ed280b3e7a5cc59a091de86e66620f3841486d40ef51f3c845bf698f6"
	I1122 00:20:40.987385  218693 logs.go:123] Gathering logs for kube-apiserver [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d] ...
	I1122 00:20:40.987423  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:41.040162  218693 logs.go:123] Gathering logs for kube-apiserver [2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587] ...
	I1122 00:20:41.040209  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:41.104994  218693 logs.go:123] Gathering logs for etcd [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7] ...
	I1122 00:20:41.105033  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:41.151516  218693 logs.go:123] Gathering logs for kube-controller-manager [718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f] ...
	I1122 00:20:41.151548  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f"
	I1122 00:20:41.190029  218693 logs.go:123] Gathering logs for containerd ...
	I1122 00:20:41.190069  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1122 00:20:41.257775  218693 logs.go:123] Gathering logs for kubelet ...
	I1122 00:20:41.257820  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:20:41.419761  218693 logs.go:123] Gathering logs for kube-scheduler [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78] ...
	I1122 00:20:41.419803  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:41.509160  218693 logs.go:123] Gathering logs for kube-scheduler [b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2] ...
	I1122 00:20:41.509201  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:41.551859  218693 logs.go:123] Gathering logs for kube-controller-manager [13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216] ...
	I1122 00:20:41.551894  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:41.599128  218693 logs.go:123] Gathering logs for container status ...
	I1122 00:20:41.599167  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:20:41.552193  260527 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-k2k88" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:41.559444  260527 pod_ready.go:94] pod "coredns-66bc5c9577-k2k88" is "Ready"
	I1122 00:20:41.559475  260527 pod_ready.go:86] duration metric: took 7.254295ms for pod "coredns-66bc5c9577-k2k88" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:41.563163  260527 pod_ready.go:83] waiting for pod "etcd-embed-certs-491677" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:41.569374  260527 pod_ready.go:94] pod "etcd-embed-certs-491677" is "Ready"
	I1122 00:20:41.569405  260527 pod_ready.go:86] duration metric: took 6.207246ms for pod "etcd-embed-certs-491677" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:41.572654  260527 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-491677" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:41.578166  260527 pod_ready.go:94] pod "kube-apiserver-embed-certs-491677" is "Ready"
	I1122 00:20:41.578197  260527 pod_ready.go:86] duration metric: took 5.508968ms for pod "kube-apiserver-embed-certs-491677" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:41.581493  260527 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-491677" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:41.897425  260527 pod_ready.go:94] pod "kube-controller-manager-embed-certs-491677" is "Ready"
	I1122 00:20:41.897549  260527 pod_ready.go:86] duration metric: took 316.026753ms for pod "kube-controller-manager-embed-certs-491677" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:42.095784  260527 pod_ready.go:83] waiting for pod "kube-proxy-k9lgv" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:42.496147  260527 pod_ready.go:94] pod "kube-proxy-k9lgv" is "Ready"
	I1122 00:20:42.496186  260527 pod_ready.go:86] duration metric: took 400.373365ms for pod "kube-proxy-k9lgv" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:42.697075  260527 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-491677" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:43.096481  260527 pod_ready.go:94] pod "kube-scheduler-embed-certs-491677" is "Ready"
	I1122 00:20:43.096511  260527 pod_ready.go:86] duration metric: took 399.407479ms for pod "kube-scheduler-embed-certs-491677" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:43.096527  260527 pod_ready.go:40] duration metric: took 1.60549947s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:20:43.142523  260527 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1122 00:20:43.144413  260527 out.go:179] * Done! kubectl is now configured to use "embed-certs-491677" cluster and "default" namespace by default
	I1122 00:20:41.174971  271651 out.go:252] * Restarting existing docker container for "no-preload-781232" ...
	I1122 00:20:41.175094  271651 cli_runner.go:164] Run: docker start no-preload-781232
	I1122 00:20:41.602742  271651 cli_runner.go:164] Run: docker container inspect no-preload-781232 --format={{.State.Status}}
	I1122 00:20:41.631107  271651 kic.go:430] container "no-preload-781232" state is running.
	I1122 00:20:41.632580  271651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-781232
	I1122 00:20:41.666003  271651 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232/config.json ...
	I1122 00:20:41.666250  271651 machine.go:94] provisionDockerMachine start ...
	I1122 00:20:41.666331  271651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-781232
	I1122 00:20:41.694892  271651 main.go:143] libmachine: Using SSH client type: native
	I1122 00:20:41.695305  271651 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1122 00:20:41.695325  271651 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:20:41.696539  271651 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53028->127.0.0.1:33083: read: connection reset by peer
	I1122 00:20:44.822620  271651 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-781232
	
	I1122 00:20:44.822651  271651 ubuntu.go:182] provisioning hostname "no-preload-781232"
	I1122 00:20:44.822782  271651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-781232
	I1122 00:20:44.841668  271651 main.go:143] libmachine: Using SSH client type: native
	I1122 00:20:44.841894  271651 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1122 00:20:44.841913  271651 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-781232 && echo "no-preload-781232" | sudo tee /etc/hostname
	I1122 00:20:44.978746  271651 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-781232
	
	I1122 00:20:44.978833  271651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-781232
	I1122 00:20:44.999248  271651 main.go:143] libmachine: Using SSH client type: native
	I1122 00:20:44.999578  271651 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1122 00:20:44.999605  271651 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-781232' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-781232/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-781232' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:20:45.126381  271651 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:20:45.126418  271651 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9059/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9059/.minikube}
	I1122 00:20:45.126494  271651 ubuntu.go:190] setting up certificates
	I1122 00:20:45.126507  271651 provision.go:84] configureAuth start
	I1122 00:20:45.126584  271651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-781232
	I1122 00:20:45.147608  271651 provision.go:143] copyHostCerts
	I1122 00:20:45.147684  271651 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9059/.minikube/ca.pem, removing ...
	I1122 00:20:45.147704  271651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9059/.minikube/ca.pem
	I1122 00:20:45.147772  271651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9059/.minikube/ca.pem (1082 bytes)
	I1122 00:20:45.147914  271651 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9059/.minikube/cert.pem, removing ...
	I1122 00:20:45.147932  271651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9059/.minikube/cert.pem
	I1122 00:20:45.147966  271651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9059/.minikube/cert.pem (1123 bytes)
	I1122 00:20:45.148050  271651 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9059/.minikube/key.pem, removing ...
	I1122 00:20:45.148064  271651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9059/.minikube/key.pem
	I1122 00:20:45.148090  271651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9059/.minikube/key.pem (1679 bytes)
	I1122 00:20:45.148174  271651 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca-key.pem org=jenkins.no-preload-781232 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-781232]
	I1122 00:20:45.216841  271651 provision.go:177] copyRemoteCerts
	I1122 00:20:45.216897  271651 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:20:45.216931  271651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-781232
	I1122 00:20:45.236085  271651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/no-preload-781232/id_rsa Username:docker}
	I1122 00:20:45.330063  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1122 00:20:45.349841  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1122 00:20:45.369530  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:20:45.389053  271651 provision.go:87] duration metric: took 262.532523ms to configureAuth
	I1122 00:20:45.389081  271651 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:20:45.389285  271651 config.go:182] Loaded profile config "no-preload-781232": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:20:45.389299  271651 machine.go:97] duration metric: took 3.723023836s to provisionDockerMachine
	I1122 00:20:45.389308  271651 start.go:293] postStartSetup for "no-preload-781232" (driver="docker")
	I1122 00:20:45.389316  271651 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:20:45.389386  271651 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:20:45.389430  271651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-781232
	I1122 00:20:45.409210  271651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/no-preload-781232/id_rsa Username:docker}
	I1122 00:20:45.502549  271651 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:20:45.506540  271651 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:20:45.506564  271651 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:20:45.506575  271651 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9059/.minikube/addons for local assets ...
	I1122 00:20:45.506651  271651 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9059/.minikube/files for local assets ...
	I1122 00:20:45.506725  271651 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/ssl/certs/145302.pem -> 145302.pem in /etc/ssl/certs
	I1122 00:20:45.506816  271651 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:20:45.515318  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/ssl/certs/145302.pem --> /etc/ssl/certs/145302.pem (1708 bytes)
	I1122 00:20:45.533908  271651 start.go:296] duration metric: took 144.585465ms for postStartSetup
	I1122 00:20:45.534028  271651 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:20:45.534124  271651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-781232
	I1122 00:20:45.554579  271651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/no-preload-781232/id_rsa Username:docker}
	I1122 00:20:45.644942  271651 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:20:45.649876  271651 fix.go:56] duration metric: took 4.503705231s for fixHost
	I1122 00:20:45.649904  271651 start.go:83] releasing machines lock for "no-preload-781232", held for 4.503755569s
	I1122 00:20:45.650004  271651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-781232
	I1122 00:20:45.670195  271651 ssh_runner.go:195] Run: cat /version.json
	I1122 00:20:45.670306  271651 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:20:45.670312  271651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-781232
	I1122 00:20:45.670356  271651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-781232
	I1122 00:20:45.690464  271651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/no-preload-781232/id_rsa Username:docker}
	I1122 00:20:45.691547  271651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/no-preload-781232/id_rsa Username:docker}
	I1122 00:20:45.781325  271651 ssh_runner.go:195] Run: systemctl --version
	I1122 00:20:45.847568  271651 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:20:45.852509  271651 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:20:45.852580  271651 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:20:45.861868  271651 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1122 00:20:45.861894  271651 start.go:496] detecting cgroup driver to use...
	I1122 00:20:45.861942  271651 detect.go:190] detected "systemd" cgroup driver on host os
	I1122 00:20:45.862001  271651 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1122 00:20:45.883582  271651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1122 00:20:42.086410  269458 addons.go:530] duration metric: took 4.062719509s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1122 00:20:42.090401  269458 system_pods.go:59] 9 kube-system pods found
	I1122 00:20:42.090448  269458 system_pods.go:61] "coredns-5dd5756b68-pqbfp" [44750e8d-5eeb-4845-9029-a58cbf976b62] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:42.090462  269458 system_pods.go:61] "etcd-old-k8s-version-462319" [9580468b-aa0f-4d73-9c35-f9cc4c817cdd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:20:42.090489  269458 system_pods.go:61] "kindnet-ldtd8" [6bf161d2-c442-466d-98b8-c313a127bf22] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1122 00:20:42.090499  269458 system_pods.go:61] "kube-apiserver-old-k8s-version-462319" [2f4b6fd0-2929-448d-820c-aabf2a9d4744] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:20:42.090512  269458 system_pods.go:61] "kube-controller-manager-old-k8s-version-462319" [83b4a291-8bac-4581-b4a6-80471e7228eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:20:42.090521  269458 system_pods.go:61] "kube-proxy-kqrng" [643cd348-4af3-4720-af0d-e931f184742c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:20:42.090533  269458 system_pods.go:61] "kube-scheduler-old-k8s-version-462319" [c1dc982d-cc79-4df6-bdc4-7e47f5d5236c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:20:42.090542  269458 system_pods.go:61] "metrics-server-57f55c9bc5-m2z8b" [d6d9bc49-d78b-4c7d-9bda-04e70f660290] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1122 00:20:42.090549  269458 system_pods.go:61] "storage-provisioner" [fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:42.090558  269458 system_pods.go:74] duration metric: took 5.478417ms to wait for pod list to return data ...
	I1122 00:20:42.090570  269458 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:20:42.092794  269458 default_sa.go:45] found service account: "default"
	I1122 00:20:42.092813  269458 default_sa.go:55] duration metric: took 2.232935ms for default service account to be created ...
	I1122 00:20:42.092821  269458 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:20:42.096929  269458 system_pods.go:86] 9 kube-system pods found
	I1122 00:20:42.096960  269458 system_pods.go:89] "coredns-5dd5756b68-pqbfp" [44750e8d-5eeb-4845-9029-a58cbf976b62] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:42.096971  269458 system_pods.go:89] "etcd-old-k8s-version-462319" [9580468b-aa0f-4d73-9c35-f9cc4c817cdd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:20:42.096982  269458 system_pods.go:89] "kindnet-ldtd8" [6bf161d2-c442-466d-98b8-c313a127bf22] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1122 00:20:42.096999  269458 system_pods.go:89] "kube-apiserver-old-k8s-version-462319" [2f4b6fd0-2929-448d-820c-aabf2a9d4744] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:20:42.097013  269458 system_pods.go:89] "kube-controller-manager-old-k8s-version-462319" [83b4a291-8bac-4581-b4a6-80471e7228eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:20:42.097020  269458 system_pods.go:89] "kube-proxy-kqrng" [643cd348-4af3-4720-af0d-e931f184742c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:20:42.097025  269458 system_pods.go:89] "kube-scheduler-old-k8s-version-462319" [c1dc982d-cc79-4df6-bdc4-7e47f5d5236c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:20:42.097030  269458 system_pods.go:89] "metrics-server-57f55c9bc5-m2z8b" [d6d9bc49-d78b-4c7d-9bda-04e70f660290] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1122 00:20:42.097039  269458 system_pods.go:89] "storage-provisioner" [fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:42.097046  269458 system_pods.go:126] duration metric: took 4.219175ms to wait for k8s-apps to be running ...
	I1122 00:20:42.097053  269458 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:20:42.097104  269458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:20:42.112045  269458 system_svc.go:56] duration metric: took 14.981035ms WaitForService to wait for kubelet
	I1122 00:20:42.112078  269458 kubeadm.go:587] duration metric: took 4.087849153s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:20:42.112098  269458 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:20:42.114860  269458 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1122 00:20:42.114884  269458 node_conditions.go:123] node cpu capacity is 8
	I1122 00:20:42.114898  269458 node_conditions.go:105] duration metric: took 2.795002ms to run NodePressure ...
	I1122 00:20:42.114914  269458 start.go:242] waiting for startup goroutines ...
	I1122 00:20:42.114925  269458 start.go:247] waiting for cluster config update ...
	I1122 00:20:42.114938  269458 start.go:256] writing updated cluster config ...
	I1122 00:20:42.115180  269458 ssh_runner.go:195] Run: rm -f paused
	I1122 00:20:42.119502  269458 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:20:42.124404  269458 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-pqbfp" in "kube-system" namespace to be "Ready" or be gone ...
	W1122 00:20:44.130807  269458 pod_ready.go:104] pod "coredns-5dd5756b68-pqbfp" is not "Ready", error: <nil>
	W1122 00:20:46.131721  269458 pod_ready.go:104] pod "coredns-5dd5756b68-pqbfp" is not "Ready", error: <nil>
	I1122 00:20:45.898007  271651 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:20:45.898082  271651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:20:45.916642  271651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:20:45.932424  271651 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:20:46.029899  271651 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:20:46.129762  271651 docker.go:234] disabling docker service ...
	I1122 00:20:46.129828  271651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:20:46.147182  271651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:20:46.162484  271651 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:20:46.254302  271651 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:20:46.343398  271651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:20:46.357762  271651 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:20:46.374877  271651 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1122 00:20:46.384787  271651 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1122 00:20:46.394901  271651 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1122 00:20:46.394976  271651 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1122 00:20:46.405203  271651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:20:46.416542  271651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1122 00:20:46.426018  271651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:20:46.436068  271651 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:20:46.446031  271651 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1122 00:20:46.456283  271651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1122 00:20:46.466098  271651 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1122 00:20:46.475564  271651 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:20:46.483749  271651 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:20:46.492209  271651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:20:46.580614  271651 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1122 00:20:46.690901  271651 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1122 00:20:46.691011  271651 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1122 00:20:46.695597  271651 start.go:564] Will wait 60s for crictl version
	I1122 00:20:46.695674  271651 ssh_runner.go:195] Run: which crictl
	I1122 00:20:46.700386  271651 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:20:46.730033  271651 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1122 00:20:46.730103  271651 ssh_runner.go:195] Run: containerd --version
	I1122 00:20:46.753517  271651 ssh_runner.go:195] Run: containerd --version
	I1122 00:20:46.779524  271651 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
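(Sketch, not captured output.) The sed/systemctl sequence above rewrites /etc/containerd/config.toml for the systemd cgroup driver and the pause image, then restarts containerd; the equivalent manual steps on the node, using only commands that appear in the log lines above, would be roughly:

	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
	sudo systemctl daemon-reload
	sudo systemctl restart containerd
	sudo crictl version   # runtime should answer on /run/containerd/containerd.sock before kubeadm continues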
	I1122 00:20:44.162068  218693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:20:45.910683  218693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:46722->192.168.76.2:8443: read: connection reset by peer
	I1122 00:20:45.910762  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:20:45.910821  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:20:45.942961  218693 cri.go:89] found id: "81142e7ed280b3e7a5cc59a091de86e66620f3841486d40ef51f3c845bf698f6"
	I1122 00:20:45.943053  218693 cri.go:89] found id: "031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:45.943063  218693 cri.go:89] found id: "2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:45.943084  218693 cri.go:89] found id: ""
	I1122 00:20:45.943095  218693 logs.go:282] 3 containers: [81142e7ed280b3e7a5cc59a091de86e66620f3841486d40ef51f3c845bf698f6 031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587]
	I1122 00:20:45.943203  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:45.948859  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:45.952999  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:45.957041  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1122 00:20:45.957122  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:20:45.988998  218693 cri.go:89] found id: "ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:45.989018  218693 cri.go:89] found id: ""
	I1122 00:20:45.989026  218693 logs.go:282] 1 containers: [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7]
	I1122 00:20:45.989073  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:45.993507  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1122 00:20:45.993569  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:20:46.021440  218693 cri.go:89] found id: ""
	I1122 00:20:46.021465  218693 logs.go:282] 0 containers: []
	W1122 00:20:46.021477  218693 logs.go:284] No container was found matching "coredns"
	I1122 00:20:46.021485  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:20:46.021548  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:20:46.051857  218693 cri.go:89] found id: "8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:46.051885  218693 cri.go:89] found id: "b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:46.051889  218693 cri.go:89] found id: ""
	I1122 00:20:46.051921  218693 logs.go:282] 2 containers: [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2]
	I1122 00:20:46.051968  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:46.056981  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:46.061726  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:20:46.061802  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:20:46.096128  218693 cri.go:89] found id: ""
	I1122 00:20:46.096172  218693 logs.go:282] 0 containers: []
	W1122 00:20:46.096184  218693 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:20:46.096194  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:20:46.096271  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:20:46.123687  218693 cri.go:89] found id: "718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f"
	I1122 00:20:46.123714  218693 cri.go:89] found id: "13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:46.123720  218693 cri.go:89] found id: ""
	I1122 00:20:46.123729  218693 logs.go:282] 2 containers: [718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216]
	I1122 00:20:46.123790  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:46.128818  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:46.133506  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1122 00:20:46.133581  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:20:46.162071  218693 cri.go:89] found id: ""
	I1122 00:20:46.162099  218693 logs.go:282] 0 containers: []
	W1122 00:20:46.162107  218693 logs.go:284] No container was found matching "kindnet"
	I1122 00:20:46.162119  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:20:46.162178  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:20:46.197735  218693 cri.go:89] found id: ""
	I1122 00:20:46.197772  218693 logs.go:282] 0 containers: []
	W1122 00:20:46.197787  218693 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:20:46.197800  218693 logs.go:123] Gathering logs for kube-scheduler [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78] ...
	I1122 00:20:46.197816  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:46.256663  218693 logs.go:123] Gathering logs for kube-scheduler [b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2] ...
	I1122 00:20:46.256690  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:46.301782  218693 logs.go:123] Gathering logs for kube-controller-manager [13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216] ...
	I1122 00:20:46.301819  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:46.335279  218693 logs.go:123] Gathering logs for containerd ...
	I1122 00:20:46.335311  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1122 00:20:46.388372  218693 logs.go:123] Gathering logs for kubelet ...
	I1122 00:20:46.388402  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:20:46.486723  218693 logs.go:123] Gathering logs for dmesg ...
	I1122 00:20:46.486756  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:20:46.500905  218693 logs.go:123] Gathering logs for kube-apiserver [81142e7ed280b3e7a5cc59a091de86e66620f3841486d40ef51f3c845bf698f6] ...
	I1122 00:20:46.500936  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 81142e7ed280b3e7a5cc59a091de86e66620f3841486d40ef51f3c845bf698f6"
	I1122 00:20:46.540691  218693 logs.go:123] Gathering logs for kube-apiserver [2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587] ...
	I1122 00:20:46.540721  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:46.575433  218693 logs.go:123] Gathering logs for kube-controller-manager [718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f] ...
	I1122 00:20:46.575465  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f"
	I1122 00:20:46.606747  218693 logs.go:123] Gathering logs for container status ...
	I1122 00:20:46.606776  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:20:46.641596  218693 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:20:46.641630  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:20:46.714363  218693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:20:46.714390  218693 logs.go:123] Gathering logs for kube-apiserver [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d] ...
	I1122 00:20:46.714405  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:46.751379  218693 logs.go:123] Gathering logs for etcd [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7] ...
	I1122 00:20:46.751411  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:46.780860  271651 cli_runner.go:164] Run: docker network inspect no-preload-781232 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:20:46.800096  271651 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1122 00:20:46.804435  271651 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
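(Sketch, not captured output.) The one-liner above updates /etc/hosts idempotently: drop any existing host.minikube.internal line, append the fresh mapping, then copy the temp file back with sudo. Spelled out, assuming the same tab-separated entry:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.94.1\thost.minikube.internal\n'
	} > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts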
	I1122 00:20:46.815135  271651 kubeadm.go:884] updating cluster {Name:no-preload-781232 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-781232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:20:46.815300  271651 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:20:46.815354  271651 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:20:46.841055  271651 containerd.go:627] all images are preloaded for containerd runtime.
	I1122 00:20:46.841078  271651 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:20:46.841085  271651 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 containerd true true} ...
	I1122 00:20:46.841185  271651 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-781232 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-781232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:20:46.841246  271651 ssh_runner.go:195] Run: sudo crictl info
	I1122 00:20:46.869512  271651 cni.go:84] Creating CNI manager for ""
	I1122 00:20:46.869537  271651 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:20:46.869558  271651 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:20:46.869579  271651 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-781232 NodeName:no-preload-781232 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:20:46.869707  271651 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-781232"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:20:46.869766  271651 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:20:46.879172  271651 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:20:46.879246  271651 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:20:46.888577  271651 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1122 00:20:46.901776  271651 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:20:46.916546  271651 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1122 00:20:46.929837  271651 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:20:46.933840  271651 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:20:46.944382  271651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:20:47.027162  271651 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:20:47.053782  271651 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232 for IP: 192.168.94.2
	I1122 00:20:47.053805  271651 certs.go:195] generating shared ca certs ...
	I1122 00:20:47.053826  271651 certs.go:227] acquiring lock for ca certs: {Name:mkcee17f48cab2703d4de8a78a6fb8af44d9e7e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:20:47.054017  271651 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-9059/.minikube/ca.key
	I1122 00:20:47.054073  271651 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-9059/.minikube/proxy-client-ca.key
	I1122 00:20:47.054095  271651 certs.go:257] generating profile certs ...
	I1122 00:20:47.054221  271651 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232/client.key
	I1122 00:20:47.054337  271651 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232/apiserver.key.80216c10
	I1122 00:20:47.054412  271651 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232/proxy-client.key
	I1122 00:20:47.054552  271651 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/14530.pem (1338 bytes)
	W1122 00:20:47.054609  271651 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-9059/.minikube/certs/14530_empty.pem, impossibly tiny 0 bytes
	I1122 00:20:47.054623  271651 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca-key.pem (1675 bytes)
	I1122 00:20:47.054660  271651 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem (1082 bytes)
	I1122 00:20:47.054695  271651 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:20:47.054737  271651 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/key.pem (1679 bytes)
	I1122 00:20:47.054803  271651 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/ssl/certs/145302.pem (1708 bytes)
	I1122 00:20:47.056310  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:20:47.077024  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:20:47.097417  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:20:47.118489  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1122 00:20:47.143382  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1122 00:20:47.167131  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1122 00:20:47.187237  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:20:47.206197  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:20:47.224793  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/certs/14530.pem --> /usr/share/ca-certificates/14530.pem (1338 bytes)
	I1122 00:20:47.243726  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/ssl/certs/145302.pem --> /usr/share/ca-certificates/145302.pem (1708 bytes)
	I1122 00:20:47.263970  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:20:47.284711  271651 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:20:47.297960  271651 ssh_runner.go:195] Run: openssl version
	I1122 00:20:47.305462  271651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145302.pem && ln -fs /usr/share/ca-certificates/145302.pem /etc/ssl/certs/145302.pem"
	I1122 00:20:47.315837  271651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145302.pem
	I1122 00:20:47.321286  271651 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:52 /usr/share/ca-certificates/145302.pem
	I1122 00:20:47.321360  271651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145302.pem
	I1122 00:20:47.359997  271651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145302.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:20:47.369513  271651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:20:47.378451  271651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:20:47.382473  271651 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:47 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:20:47.382531  271651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:20:47.418380  271651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:20:47.427427  271651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14530.pem && ln -fs /usr/share/ca-certificates/14530.pem /etc/ssl/certs/14530.pem"
	I1122 00:20:47.436726  271651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14530.pem
	I1122 00:20:47.440941  271651 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:52 /usr/share/ca-certificates/14530.pem
	I1122 00:20:47.441009  271651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14530.pem
	I1122 00:20:47.476237  271651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14530.pem /etc/ssl/certs/51391683.0"
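(Sketch, not captured output.) The ls/openssl/ln sequences above install each certificate under its OpenSSL subject-hash name, which is where link names like /etc/ssl/certs/b5213941.0 come from; for the minikubeCA.pem case, assuming openssl is present on the node:

	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	HASH="$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)"   # prints e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"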
	I1122 00:20:47.485111  271651 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:20:47.489344  271651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:20:47.525023  271651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:20:47.560451  271651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:20:47.600419  271651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:20:47.659553  271651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:20:47.709346  271651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
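(Sketch, not captured output.) Each -checkend 86400 call above exits 0 if the certificate will still be valid one day from now and non-zero otherwise, a quick way to check whether the existing control-plane certs are still usable; for example:

	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	  echo "cert valid for at least another 24h"
	else
	  echo "cert expires within 24h (or is already expired)"
	fi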
	I1122 00:20:47.754761  271651 kubeadm.go:401] StartCluster: {Name:no-preload-781232 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-781232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:20:47.754852  271651 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1122 00:20:47.754919  271651 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:20:47.799187  271651 cri.go:89] found id: "e5449caa295e69e90453980f9e9bb8cca5858a385c302dd4f9a74d2514f50118"
	I1122 00:20:47.799210  271651 cri.go:89] found id: "dd1c7227c8c4e2bf1ae891b410d84e72536ab5c3bf5218fa25044e4b2849b261"
	I1122 00:20:47.799223  271651 cri.go:89] found id: "35b440e09c5a86db6c6dce68737e0b37e0d9302f51ce5fd729ac86a23bae6714"
	I1122 00:20:47.799228  271651 cri.go:89] found id: "a088ba754b8b52d8b3ef3947967041e2570a6e8d27ff5de86ee8d26a638e3aa9"
	I1122 00:20:47.799232  271651 cri.go:89] found id: "b61337c7649d1c8ad6db13120b3d0c9730687561de6dd7c132264eba4d1070be"
	I1122 00:20:47.799237  271651 cri.go:89] found id: "a8df28ee53bb60379874726c9a896717f75e12fd13a7316e60ad11da58feca4a"
	I1122 00:20:47.799241  271651 cri.go:89] found id: "304e6535bf7bedf2a516b8d232b19d3e038abaca4c8c450355eade98b387f580"
	I1122 00:20:47.799246  271651 cri.go:89] found id: "2b0f0e4e1df6d003c1fd5d63a2d88caf527a5828be1e719b714f70bf70e013e6"
	I1122 00:20:47.799250  271651 cri.go:89] found id: "13c5477f80d07937f3038c381810143f379c1a5724ad58b9f212e7d95e199ef6"
	I1122 00:20:47.799274  271651 cri.go:89] found id: "6b02e9e9a07928c42cf1e5bb58d45de4ce420454640d91b3f098f98aa2f59ca6"
	I1122 00:20:47.799280  271651 cri.go:89] found id: "7f1227117afb11933863eec6c929a38cd5f7c89c181f267ac92151e7d68ac0bb"
	I1122 00:20:47.799284  271651 cri.go:89] found id: "190bb0852270abcf17fda286c6be5e9fcb36eb2b98dcf07cf71fa2985c5db26b"
	I1122 00:20:47.799289  271651 cri.go:89] found id: ""
	I1122 00:20:47.799343  271651 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1122 00:20:47.828549  271651 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"0cf2ef9c224764f540097675043703c9a44c2537f156ccff6692ff1d85afe437","pid":858,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0cf2ef9c224764f540097675043703c9a44c2537f156ccff6692ff1d85afe437","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0cf2ef9c224764f540097675043703c9a44c2537f156ccff6692ff1d85afe437/rootfs","created":"2025-11-22T00:20:47.664964359Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"0cf2ef9c224764f540097675043703c9a44c2537f156ccff6692ff1d85afe437","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-no-preload-781232_311cfc4ebe5dbfb8c158af5da75e855b","io.kubernetes.cri.sandbox-memor
y":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-no-preload-781232","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"311cfc4ebe5dbfb8c158af5da75e855b"},"owner":"root"},{"ociVersion":"1.2.1","id":"35b440e09c5a86db6c6dce68737e0b37e0d9302f51ce5fd729ac86a23bae6714","pid":968,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35b440e09c5a86db6c6dce68737e0b37e0d9302f51ce5fd729ac86a23bae6714","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35b440e09c5a86db6c6dce68737e0b37e0d9302f51ce5fd729ac86a23bae6714/rootfs","created":"2025-11-22T00:20:47.804770594Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"6d38af15103c7db75748f98b5c0b6d021b358fbd9a7ae309bdac027ed97eccd2","io.kubernetes.cri.sandbox-name":"kube-scheduler-no-preload-781232","io.kubernetes.cri.sandbox-nam
espace":"kube-system","io.kubernetes.cri.sandbox-uid":"b6660e44a79de4c519af19191b40ac51"},"owner":"root"},{"ociVersion":"1.2.1","id":"6d38af15103c7db75748f98b5c0b6d021b358fbd9a7ae309bdac027ed97eccd2","pid":829,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d38af15103c7db75748f98b5c0b6d021b358fbd9a7ae309bdac027ed97eccd2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d38af15103c7db75748f98b5c0b6d021b358fbd9a7ae309bdac027ed97eccd2/rootfs","created":"2025-11-22T00:20:47.655708991Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"6d38af15103c7db75748f98b5c0b6d021b358fbd9a7ae309bdac027ed97eccd2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-781232_b6660e44a79de4c519a
f19191b40ac51","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-no-preload-781232","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b6660e44a79de4c519af19191b40ac51"},"owner":"root"},{"ociVersion":"1.2.1","id":"6ed8ae255eb270ce384b53a2cfa8af556d87314b9ef910c4ddf73b5057ba4cae","pid":865,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6ed8ae255eb270ce384b53a2cfa8af556d87314b9ef910c4ddf73b5057ba4cae","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6ed8ae255eb270ce384b53a2cfa8af556d87314b9ef910c4ddf73b5057ba4cae/rootfs","created":"2025-11-22T00:20:47.669557881Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"6ed8ae255eb270ce384b53a2cfa8af556d8
7314b9ef910c4ddf73b5057ba4cae","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-781232_0ea3925d850410c51c93e1eebc56436e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-no-preload-781232","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0ea3925d850410c51c93e1eebc56436e"},"owner":"root"},{"ociVersion":"1.2.1","id":"a088ba754b8b52d8b3ef3947967041e2570a6e8d27ff5de86ee8d26a638e3aa9","pid":932,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a088ba754b8b52d8b3ef3947967041e2570a6e8d27ff5de86ee8d26a638e3aa9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a088ba754b8b52d8b3ef3947967041e2570a6e8d27ff5de86ee8d26a638e3aa9/rootfs","created":"2025-11-22T00:20:47.790822837Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0"
,"io.kubernetes.cri.sandbox-id":"f19abd08427cefe6869fd03c704a57af65b1ae617ff80c086f59d4b339a24ac5","io.kubernetes.cri.sandbox-name":"etcd-no-preload-781232","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2bde7d118300deb354bbf504cfa1dd64"},"owner":"root"},{"ociVersion":"1.2.1","id":"dd1c7227c8c4e2bf1ae891b410d84e72536ab5c3bf5218fa25044e4b2849b261","pid":0,"status":"stopped","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd1c7227c8c4e2bf1ae891b410d84e72536ab5c3bf5218fa25044e4b2849b261","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd1c7227c8c4e2bf1ae891b410d84e72536ab5c3bf5218fa25044e4b2849b261/rootfs","created":"0001-01-01T00:00:00Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"0cf2ef9c224764f540097675043703c9a44c2537f156ccff6692ff1d85afe437","io.kubernetes.cri.sandbox-n
ame":"kube-apiserver-no-preload-781232","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"311cfc4ebe5dbfb8c158af5da75e855b"},"owner":"root"},{"ociVersion":"1.2.1","id":"e5449caa295e69e90453980f9e9bb8cca5858a385c302dd4f9a74d2514f50118","pid":0,"status":"stopped","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5449caa295e69e90453980f9e9bb8cca5858a385c302dd4f9a74d2514f50118","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5449caa295e69e90453980f9e9bb8cca5858a385c302dd4f9a74d2514f50118/rootfs","created":"0001-01-01T00:00:00Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"6ed8ae255eb270ce384b53a2cfa8af556d87314b9ef910c4ddf73b5057ba4cae","io.kubernetes.cri.sandbox-name":"kube-controller-manager-no-preload-781232","io.kubernetes.cri.sandbox-namespace":"kube-system
","io.kubernetes.cri.sandbox-uid":"0ea3925d850410c51c93e1eebc56436e"},"owner":"root"},{"ociVersion":"1.2.1","id":"f19abd08427cefe6869fd03c704a57af65b1ae617ff80c086f59d4b339a24ac5","pid":849,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f19abd08427cefe6869fd03c704a57af65b1ae617ff80c086f59d4b339a24ac5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f19abd08427cefe6869fd03c704a57af65b1ae617ff80c086f59d4b339a24ac5/rootfs","created":"2025-11-22T00:20:47.658838122Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"f19abd08427cefe6869fd03c704a57af65b1ae617ff80c086f59d4b339a24ac5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-781232_2bde7d118300deb354bbf504cfa1dd64","io.kubernetes.
cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-no-preload-781232","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2bde7d118300deb354bbf504cfa1dd64"},"owner":"root"}]
	I1122 00:20:47.828773  271651 cri.go:126] list returned 8 containers
	I1122 00:20:47.828789  271651 cri.go:129] container: {ID:0cf2ef9c224764f540097675043703c9a44c2537f156ccff6692ff1d85afe437 Status:running}
	I1122 00:20:47.828832  271651 cri.go:131] skipping 0cf2ef9c224764f540097675043703c9a44c2537f156ccff6692ff1d85afe437 - not in ps
	I1122 00:20:47.828844  271651 cri.go:129] container: {ID:35b440e09c5a86db6c6dce68737e0b37e0d9302f51ce5fd729ac86a23bae6714 Status:created}
	I1122 00:20:47.828855  271651 cri.go:135] skipping {35b440e09c5a86db6c6dce68737e0b37e0d9302f51ce5fd729ac86a23bae6714 created}: state = "created", want "paused"
	I1122 00:20:47.828870  271651 cri.go:129] container: {ID:6d38af15103c7db75748f98b5c0b6d021b358fbd9a7ae309bdac027ed97eccd2 Status:running}
	I1122 00:20:47.828878  271651 cri.go:131] skipping 6d38af15103c7db75748f98b5c0b6d021b358fbd9a7ae309bdac027ed97eccd2 - not in ps
	I1122 00:20:47.828889  271651 cri.go:129] container: {ID:6ed8ae255eb270ce384b53a2cfa8af556d87314b9ef910c4ddf73b5057ba4cae Status:running}
	I1122 00:20:47.828896  271651 cri.go:131] skipping 6ed8ae255eb270ce384b53a2cfa8af556d87314b9ef910c4ddf73b5057ba4cae - not in ps
	I1122 00:20:47.828907  271651 cri.go:129] container: {ID:a088ba754b8b52d8b3ef3947967041e2570a6e8d27ff5de86ee8d26a638e3aa9 Status:created}
	I1122 00:20:47.828916  271651 cri.go:135] skipping {a088ba754b8b52d8b3ef3947967041e2570a6e8d27ff5de86ee8d26a638e3aa9 created}: state = "created", want "paused"
	I1122 00:20:47.828929  271651 cri.go:129] container: {ID:dd1c7227c8c4e2bf1ae891b410d84e72536ab5c3bf5218fa25044e4b2849b261 Status:stopped}
	I1122 00:20:47.828938  271651 cri.go:135] skipping {dd1c7227c8c4e2bf1ae891b410d84e72536ab5c3bf5218fa25044e4b2849b261 stopped}: state = "stopped", want "paused"
	I1122 00:20:47.828954  271651 cri.go:129] container: {ID:e5449caa295e69e90453980f9e9bb8cca5858a385c302dd4f9a74d2514f50118 Status:stopped}
	I1122 00:20:47.828966  271651 cri.go:135] skipping {e5449caa295e69e90453980f9e9bb8cca5858a385c302dd4f9a74d2514f50118 stopped}: state = "stopped", want "paused"
	I1122 00:20:47.828976  271651 cri.go:129] container: {ID:f19abd08427cefe6869fd03c704a57af65b1ae617ff80c086f59d4b339a24ac5 Status:running}
	I1122 00:20:47.828986  271651 cri.go:131] skipping f19abd08427cefe6869fd03c704a57af65b1ae617ff80c086f59d4b339a24ac5 - not in ps
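(Sketch, not captured output; assumes jq is available on the node.) The skip decisions above come from filtering the runc list -f json output by container status, keeping only containers in the "paused" state; the same filter expressed directly:

	sudo runc --root /run/containerd/runc/k8s.io list -f json \
	  | jq -r '.[] | select(.status == "paused") | .id'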
	I1122 00:20:47.829046  271651 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:20:47.841076  271651 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1122 00:20:47.841097  271651 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1122 00:20:47.841145  271651 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1122 00:20:47.855332  271651 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:20:47.856667  271651 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-781232" does not appear in /home/jenkins/minikube-integration/21934-9059/kubeconfig
	I1122 00:20:47.857597  271651 kubeconfig.go:62] /home/jenkins/minikube-integration/21934-9059/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-781232" cluster setting kubeconfig missing "no-preload-781232" context setting]
	I1122 00:20:47.858995  271651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/kubeconfig: {Name:mk1de43c606bf9b357397ed899e71eb19bad0265 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:20:47.861431  271651 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1122 00:20:47.873388  271651 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1122 00:20:47.873445  271651 kubeadm.go:602] duration metric: took 32.341557ms to restartPrimaryControlPlane
	I1122 00:20:47.873464  271651 kubeadm.go:403] duration metric: took 118.736228ms to StartCluster
	I1122 00:20:47.873485  271651 settings.go:142] acquiring lock: {Name:mk1d60582df8b538e3c57bd1424924e717e0072a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:20:47.873577  271651 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-9059/kubeconfig
	I1122 00:20:47.876108  271651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/kubeconfig: {Name:mk1de43c606bf9b357397ed899e71eb19bad0265 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:20:47.876485  271651 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1122 00:20:47.876636  271651 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:20:47.877236  271651 addons.go:70] Setting dashboard=true in profile "no-preload-781232"
	I1122 00:20:47.877267  271651 addons.go:239] Setting addon dashboard=true in "no-preload-781232"
	W1122 00:20:47.877275  271651 addons.go:248] addon dashboard should already be in state true
	I1122 00:20:47.877305  271651 host.go:66] Checking if "no-preload-781232" exists ...
	I1122 00:20:47.877817  271651 cli_runner.go:164] Run: docker container inspect no-preload-781232 --format={{.State.Status}}
	I1122 00:20:47.876776  271651 config.go:182] Loaded profile config "no-preload-781232": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:20:47.878124  271651 addons.go:70] Setting default-storageclass=true in profile "no-preload-781232"
	I1122 00:20:47.878143  271651 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-781232"
	I1122 00:20:47.878468  271651 cli_runner.go:164] Run: docker container inspect no-preload-781232 --format={{.State.Status}}
	I1122 00:20:47.878642  271651 addons.go:70] Setting storage-provisioner=true in profile "no-preload-781232"
	I1122 00:20:47.878661  271651 addons.go:239] Setting addon storage-provisioner=true in "no-preload-781232"
	W1122 00:20:47.878670  271651 addons.go:248] addon storage-provisioner should already be in state true
	I1122 00:20:47.878699  271651 host.go:66] Checking if "no-preload-781232" exists ...
	I1122 00:20:47.879157  271651 addons.go:70] Setting metrics-server=true in profile "no-preload-781232"
	I1122 00:20:47.879176  271651 addons.go:239] Setting addon metrics-server=true in "no-preload-781232"
	W1122 00:20:47.879184  271651 addons.go:248] addon metrics-server should already be in state true
	I1122 00:20:47.879209  271651 host.go:66] Checking if "no-preload-781232" exists ...
	I1122 00:20:47.879329  271651 cli_runner.go:164] Run: docker container inspect no-preload-781232 --format={{.State.Status}}
	I1122 00:20:47.879791  271651 cli_runner.go:164] Run: docker container inspect no-preload-781232 --format={{.State.Status}}
	I1122 00:20:47.883756  271651 out.go:179] * Verifying Kubernetes components...
	I1122 00:20:47.885139  271651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:20:47.911440  271651 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:20:47.911867  271651 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1122 00:20:47.913400  271651 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:20:47.913472  271651 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:20:47.913449  271651 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1122 00:20:47.913792  271651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-781232
	I1122 00:20:47.915108  271651 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1122 00:20:47.915136  271651 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1122 00:20:47.915162  271651 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1122 00:20:47.915225  271651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-781232
	I1122 00:20:47.916406  271651 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1122 00:20:47.916427  271651 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1122 00:20:47.916494  271651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-781232
	I1122 00:20:47.918308  271651 addons.go:239] Setting addon default-storageclass=true in "no-preload-781232"
	W1122 00:20:47.918331  271651 addons.go:248] addon default-storageclass should already be in state true
	I1122 00:20:47.918361  271651 host.go:66] Checking if "no-preload-781232" exists ...
	I1122 00:20:47.918979  271651 cli_runner.go:164] Run: docker container inspect no-preload-781232 --format={{.State.Status}}
	I1122 00:20:47.940025  271651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/no-preload-781232/id_rsa Username:docker}
	I1122 00:20:47.948359  271651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/no-preload-781232/id_rsa Username:docker}
	I1122 00:20:47.948788  271651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/no-preload-781232/id_rsa Username:docker}
	I1122 00:20:47.955313  271651 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:20:47.955337  271651 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:20:47.955392  271651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-781232
	I1122 00:20:47.985983  271651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/no-preload-781232/id_rsa Username:docker}
	I1122 00:20:48.060916  271651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:20:48.066022  271651 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:20:48.071728  271651 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1122 00:20:48.071752  271651 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1122 00:20:48.074813  271651 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1122 00:20:48.074835  271651 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1122 00:20:48.092826  271651 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1122 00:20:48.092855  271651 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1122 00:20:48.093244  271651 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1122 00:20:48.093303  271651 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1122 00:20:48.101409  271651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:20:48.111335  271651 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1122 00:20:48.111363  271651 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1122 00:20:48.112088  271651 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1122 00:20:48.112108  271651 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1122 00:20:48.131478  271651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1122 00:20:48.133255  271651 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1122 00:20:48.133299  271651 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1122 00:20:48.153747  271651 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1122 00:20:48.153859  271651 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1122 00:20:48.171501  271651 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1122 00:20:48.171544  271651 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1122 00:20:48.194062  271651 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1122 00:20:48.194089  271651 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1122 00:20:48.211682  271651 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1122 00:20:48.211713  271651 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1122 00:20:48.225739  271651 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1122 00:20:48.225765  271651 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1122 00:20:48.239477  271651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1122 00:20:50.102298  271651 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.041269787s)
	I1122 00:20:50.102380  271651 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.036325377s)
	I1122 00:20:50.102435  271651 node_ready.go:35] waiting up to 6m0s for node "no-preload-781232" to be "Ready" ...
	I1122 00:20:50.102485  271651 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.001041352s)
	I1122 00:20:50.102588  271651 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.971077401s)
	I1122 00:20:50.102614  271651 addons.go:495] Verifying addon metrics-server=true in "no-preload-781232"
	I1122 00:20:50.102742  271651 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.863224744s)
	I1122 00:20:50.104446  271651 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-781232 addons enable metrics-server
	
	I1122 00:20:50.111974  271651 node_ready.go:49] node "no-preload-781232" is "Ready"
	I1122 00:20:50.112013  271651 node_ready.go:38] duration metric: took 9.530547ms for node "no-preload-781232" to be "Ready" ...
	I1122 00:20:50.112029  271651 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:20:50.112071  271651 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:20:50.120338  271651 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I1122 00:20:50.121455  271651 addons.go:530] duration metric: took 2.244826496s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I1122 00:20:50.125729  271651 api_server.go:72] duration metric: took 2.248867678s to wait for apiserver process to appear ...
	I1122 00:20:50.125753  271651 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:20:50.125775  271651 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1122 00:20:50.131451  271651 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1122 00:20:50.131481  271651 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1122 00:20:50.626769  271651 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1122 00:20:50.633861  271651 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1122 00:20:50.633896  271651 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1122 00:20:48.132085  269458 pod_ready.go:104] pod "coredns-5dd5756b68-pqbfp" is not "Ready", error: <nil>
	W1122 00:20:50.132639  269458 pod_ready.go:104] pod "coredns-5dd5756b68-pqbfp" is not "Ready", error: <nil>
	I1122 00:20:49.288636  218693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:20:49.289177  218693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:20:49.289244  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:20:49.289331  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:20:49.318321  218693 cri.go:89] found id: "81142e7ed280b3e7a5cc59a091de86e66620f3841486d40ef51f3c845bf698f6"
	I1122 00:20:49.318342  218693 cri.go:89] found id: "2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:49.318346  218693 cri.go:89] found id: ""
	I1122 00:20:49.318354  218693 logs.go:282] 2 containers: [81142e7ed280b3e7a5cc59a091de86e66620f3841486d40ef51f3c845bf698f6 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587]
	I1122 00:20:49.318404  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:49.322732  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:49.328495  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1122 00:20:49.328571  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:20:49.369571  218693 cri.go:89] found id: "ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:49.369602  218693 cri.go:89] found id: ""
	I1122 00:20:49.369614  218693 logs.go:282] 1 containers: [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7]
	I1122 00:20:49.369892  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:49.376434  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1122 00:20:49.376520  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:20:49.413883  218693 cri.go:89] found id: ""
	I1122 00:20:49.413916  218693 logs.go:282] 0 containers: []
	W1122 00:20:49.413930  218693 logs.go:284] No container was found matching "coredns"
	I1122 00:20:49.413938  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:20:49.414015  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:20:49.458541  218693 cri.go:89] found id: "8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:49.458567  218693 cri.go:89] found id: "b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:49.458579  218693 cri.go:89] found id: ""
	I1122 00:20:49.458602  218693 logs.go:282] 2 containers: [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2]
	I1122 00:20:49.458682  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:49.465401  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:49.472015  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:20:49.472158  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:20:49.518511  218693 cri.go:89] found id: ""
	I1122 00:20:49.518560  218693 logs.go:282] 0 containers: []
	W1122 00:20:49.518573  218693 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:20:49.518583  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:20:49.518662  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:20:49.557146  218693 cri.go:89] found id: "718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f"
	I1122 00:20:49.557173  218693 cri.go:89] found id: "13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:49.557177  218693 cri.go:89] found id: ""
	I1122 00:20:49.557197  218693 logs.go:282] 2 containers: [718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216]
	I1122 00:20:49.557298  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:49.563058  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:49.568033  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1122 00:20:49.568107  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:20:49.601346  218693 cri.go:89] found id: ""
	I1122 00:20:49.601493  218693 logs.go:282] 0 containers: []
	W1122 00:20:49.601509  218693 logs.go:284] No container was found matching "kindnet"
	I1122 00:20:49.601519  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:20:49.601687  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:20:49.640917  218693 cri.go:89] found id: ""
	I1122 00:20:49.640948  218693 logs.go:282] 0 containers: []
	W1122 00:20:49.640961  218693 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:20:49.640973  218693 logs.go:123] Gathering logs for kubelet ...
	I1122 00:20:49.640988  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:20:49.777443  218693 logs.go:123] Gathering logs for kube-apiserver [81142e7ed280b3e7a5cc59a091de86e66620f3841486d40ef51f3c845bf698f6] ...
	I1122 00:20:49.777485  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 81142e7ed280b3e7a5cc59a091de86e66620f3841486d40ef51f3c845bf698f6"
	I1122 00:20:49.821731  218693 logs.go:123] Gathering logs for kube-apiserver [2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587] ...
	I1122 00:20:49.821777  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:49.867159  218693 logs.go:123] Gathering logs for etcd [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7] ...
	I1122 00:20:49.867208  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:49.911762  218693 logs.go:123] Gathering logs for kube-scheduler [b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2] ...
	I1122 00:20:49.911806  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:49.957831  218693 logs.go:123] Gathering logs for kube-controller-manager [718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f] ...
	I1122 00:20:49.957870  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f"
	I1122 00:20:49.994873  218693 logs.go:123] Gathering logs for containerd ...
	I1122 00:20:49.994908  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1122 00:20:50.052408  218693 logs.go:123] Gathering logs for container status ...
	I1122 00:20:50.052446  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:20:50.089867  218693 logs.go:123] Gathering logs for dmesg ...
	I1122 00:20:50.089903  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:20:50.104729  218693 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:20:50.104756  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:20:50.186784  218693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:20:50.186805  218693 logs.go:123] Gathering logs for kube-scheduler [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78] ...
	I1122 00:20:50.186820  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:50.251790  218693 logs.go:123] Gathering logs for kube-controller-manager [13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216] ...
	I1122 00:20:50.251823  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	461c148b52a86       56cc512116c8f       7 seconds ago       Running             busybox                   0                   58abf3cc4f7ef       busybox                                      default
	20a5c049d6f88       52546a367cc9e       13 seconds ago      Running             coredns                   0                   79a1c44c38dd2       coredns-66bc5c9577-k2k88                     kube-system
	fd511a6c62f69       6e38f40d628db       13 seconds ago      Running             storage-provisioner       0                   83f30bb381301       storage-provisioner                          kube-system
	e6438a8988dc0       409467f978b4a       24 seconds ago      Running             kindnet-cni               0                   58b068d5a59b0       kindnet-hv86p                                kube-system
	1b4ec96a638d6       fc25172553d79       24 seconds ago      Running             kube-proxy                0                   a5e9cb99d1f8b       kube-proxy-k9lgv                             kube-system
	743ad186a2850       c80c8dbafe7dd       34 seconds ago      Running             kube-controller-manager   0                   c2e39023d6150       kube-controller-manager-embed-certs-491677   kube-system
	7adf72f95bd8d       7dd6aaa1717ab       34 seconds ago      Running             kube-scheduler            0                   23f5d5b5da6a7       kube-scheduler-embed-certs-491677            kube-system
	3cb363d9d975d       5f1f5298c888d       34 seconds ago      Running             etcd                      0                   cc92e0c1ed96a       etcd-embed-certs-491677                      kube-system
	1fdd4e0b2d3b9       c3994bc696102       35 seconds ago      Running             kube-apiserver            0                   ba6a63941e3c1       kube-apiserver-embed-certs-491677            kube-system
	
	
	==> containerd <==
	Nov 22 00:20:40 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:40.377782372Z" level=info msg="Container 20a5c049d6f880fd6113ea2bf02b866cf8da4de064a04a9dd0ead4aeb01f3296: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:20:40 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:40.380843968Z" level=info msg="CreateContainer within sandbox \"83f30bb381301870530d114dfd5080ee46d1a31476c1dba9cbd5e7d03331de1f\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"fd511a6c62f69e24313a3290e3f0e02acd3ce88feb2c9e7fe7296730e88cb3e4\""
	Nov 22 00:20:40 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:40.381584644Z" level=info msg="StartContainer for \"fd511a6c62f69e24313a3290e3f0e02acd3ce88feb2c9e7fe7296730e88cb3e4\""
	Nov 22 00:20:40 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:40.382598255Z" level=info msg="connecting to shim fd511a6c62f69e24313a3290e3f0e02acd3ce88feb2c9e7fe7296730e88cb3e4" address="unix:///run/containerd/s/1a5a6dfb3ec99bc05b7590ae40227f5c2f88254c47dbf8a11fc8ea58060c0391" protocol=ttrpc version=3
	Nov 22 00:20:40 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:40.385156023Z" level=info msg="CreateContainer within sandbox \"79a1c44c38dd21a966c16984a660b22a05e77cce180b95c34134469f42ee439d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"20a5c049d6f880fd6113ea2bf02b866cf8da4de064a04a9dd0ead4aeb01f3296\""
	Nov 22 00:20:40 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:40.386001792Z" level=info msg="StartContainer for \"20a5c049d6f880fd6113ea2bf02b866cf8da4de064a04a9dd0ead4aeb01f3296\""
	Nov 22 00:20:40 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:40.388074624Z" level=info msg="connecting to shim 20a5c049d6f880fd6113ea2bf02b866cf8da4de064a04a9dd0ead4aeb01f3296" address="unix:///run/containerd/s/da00876605a114c60d67b77308aeb878980f80f09c41784882e4d0d420d77766" protocol=ttrpc version=3
	Nov 22 00:20:40 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:40.445770515Z" level=info msg="StartContainer for \"fd511a6c62f69e24313a3290e3f0e02acd3ce88feb2c9e7fe7296730e88cb3e4\" returns successfully"
	Nov 22 00:20:40 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:40.458981550Z" level=info msg="StartContainer for \"20a5c049d6f880fd6113ea2bf02b866cf8da4de064a04a9dd0ead4aeb01f3296\" returns successfully"
	Nov 22 00:20:43 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:43.614864265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:7f94d7ba-76b7-4739-b7a9-81d27936e10f,Namespace:default,Attempt:0,}"
	Nov 22 00:20:43 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:43.658799501Z" level=info msg="connecting to shim 58abf3cc4f7ef81b41dcf8ee3004ce41e46ee04fa9726fe380e0c1f2c09f24bf" address="unix:///run/containerd/s/6cf216f1c5410c0243f2d47b66cd95293341038a3bb812d89f2d4f81d269e558" namespace=k8s.io protocol=ttrpc version=3
	Nov 22 00:20:43 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:43.729905426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:7f94d7ba-76b7-4739-b7a9-81d27936e10f,Namespace:default,Attempt:0,} returns sandbox id \"58abf3cc4f7ef81b41dcf8ee3004ce41e46ee04fa9726fe380e0c1f2c09f24bf\""
	Nov 22 00:20:43 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:43.731835312Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 22 00:20:45 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:45.796345590Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:20:45 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:45.797112964Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396646"
	Nov 22 00:20:45 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:45.798253668Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:20:45 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:45.800058419Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:20:45 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:45.800671869Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.068789592s"
	Nov 22 00:20:45 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:45.800713412Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 22 00:20:45 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:45.804822499Z" level=info msg="CreateContainer within sandbox \"58abf3cc4f7ef81b41dcf8ee3004ce41e46ee04fa9726fe380e0c1f2c09f24bf\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 22 00:20:45 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:45.812489479Z" level=info msg="Container 461c148b52a8640242a54a0d4a15fcc93593178785f39a223711c44bc9715ad5: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:20:45 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:45.818612726Z" level=info msg="CreateContainer within sandbox \"58abf3cc4f7ef81b41dcf8ee3004ce41e46ee04fa9726fe380e0c1f2c09f24bf\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"461c148b52a8640242a54a0d4a15fcc93593178785f39a223711c44bc9715ad5\""
	Nov 22 00:20:45 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:45.819244864Z" level=info msg="StartContainer for \"461c148b52a8640242a54a0d4a15fcc93593178785f39a223711c44bc9715ad5\""
	Nov 22 00:20:45 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:45.820292452Z" level=info msg="connecting to shim 461c148b52a8640242a54a0d4a15fcc93593178785f39a223711c44bc9715ad5" address="unix:///run/containerd/s/6cf216f1c5410c0243f2d47b66cd95293341038a3bb812d89f2d4f81d269e558" protocol=ttrpc version=3
	Nov 22 00:20:45 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:45.877546924Z" level=info msg="StartContainer for \"461c148b52a8640242a54a0d4a15fcc93593178785f39a223711c44bc9715ad5\" returns successfully"
	
	
	==> coredns [20a5c049d6f880fd6113ea2bf02b866cf8da4de064a04a9dd0ead4aeb01f3296] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40650 - 41397 "HINFO IN 5607469831847770391.4605121898837800457. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018921329s
	
	
	==> describe nodes <==
	Name:               embed-certs-491677
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-491677
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=embed-certs-491677
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_20_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:20:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-491677
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:20:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:20:39 +0000   Sat, 22 Nov 2025 00:20:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:20:39 +0000   Sat, 22 Nov 2025 00:20:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:20:39 +0000   Sat, 22 Nov 2025 00:20:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:20:39 +0000   Sat, 22 Nov 2025 00:20:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-491677
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 5665009e93b91d39dc05718b691e3875
	  System UUID:                e204dac3-e20c-470b-b0cf-5f5980ede5c3
	  Boot ID:                    725aae03-f893-4e0b-b029-cbd3b00ccfdd
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-k2k88                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-embed-certs-491677                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-hv86p                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-embed-certs-491677             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-embed-certs-491677    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-k9lgv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-embed-certs-491677             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  30s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node embed-certs-491677 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node embed-certs-491677 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node embed-certs-491677 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node embed-certs-491677 event: Registered Node embed-certs-491677 in Controller
	  Normal  NodeReady                14s   kubelet          Node embed-certs-491677 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov21 23:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000865] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.410276] i8042: Warning: Keylock active
	[  +0.014947] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.495836] block sda: the capability attribute has been deprecated.
	[  +0.091740] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024333] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.452540] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [3cb363d9d975d59f674c73a72da2871248f5e1d8e260a96c1b2f8a02162d4326] <==
	{"level":"warn","ts":"2025-11-22T00:20:19.863568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.874813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.889608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.906420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.912789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.920152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.926881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.933717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.939824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.947537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.957376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.964635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.972020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.979916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.987558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.995293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:20.002056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:20.009726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:20.016540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:20.025001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:20.033624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:20.040793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:20.061234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:20.068370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:20.075776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33994","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:20:53 up  1:03,  0 user,  load average: 3.99, 3.47, 2.26
	Linux embed-certs-491677 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e6438a8988dc0f2e029e4fc1850eb99d1df097af6284354717b03557d9cf0e41] <==
	I1122 00:20:29.593761       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:20:29.594049       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1122 00:20:29.594206       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:20:29.594227       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:20:29.594293       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:20:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:20:29.795135       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:20:29.795177       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:20:29.795188       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:20:29.795400       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:20:30.291123       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:20:30.291153       1 metrics.go:72] Registering metrics
	I1122 00:20:30.291425       1 controller.go:711] "Syncing nftables rules"
	I1122 00:20:39.795673       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:20:39.795779       1 main.go:301] handling current node
	I1122 00:20:49.797449       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:20:49.797491       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1fdd4e0b2d3b945bcac84434220e89165d8896cdf19ffa5097bcc810d6f432fd] <==
	I1122 00:20:20.629687       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:20:20.632016       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1122 00:20:20.632023       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:20:20.637972       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:20:20.638087       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:20:20.674887       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1122 00:20:20.676894       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:20:21.533297       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1122 00:20:21.538200       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1122 00:20:21.538223       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:20:22.064334       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:20:22.106684       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:20:22.236984       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1122 00:20:22.243146       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1122 00:20:22.244297       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:20:22.248726       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:20:22.584108       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:20:23.413393       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:20:23.424460       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1122 00:20:23.434327       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:20:28.288895       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:20:28.296373       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:20:28.387332       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1122 00:20:28.436558       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1122 00:20:52.423994       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:35440: use of closed network connection
	
	
	==> kube-controller-manager [743ad186a28504b85641bc291d2966e934eca74995c9788d8acf80ce552cc12d] <==
	I1122 00:20:27.582933       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1122 00:20:27.582949       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1122 00:20:27.582979       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1122 00:20:27.583013       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1122 00:20:27.583053       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:20:27.583106       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:20:27.583121       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:20:27.583108       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1122 00:20:27.583071       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1122 00:20:27.583320       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1122 00:20:27.583341       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1122 00:20:27.583359       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1122 00:20:27.583372       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1122 00:20:27.583475       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1122 00:20:27.584240       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1122 00:20:27.584317       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1122 00:20:27.585297       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1122 00:20:27.587505       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1122 00:20:27.589771       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1122 00:20:27.589828       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:20:27.593071       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:20:27.595425       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1122 00:20:27.602716       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:20:27.606230       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:20:42.535417       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [1b4ec96a638d6d78b8ac0f347162e86a708a0e53df8de231ca44c7eee2b08994] <==
	I1122 00:20:29.056133       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:20:29.131229       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:20:29.231660       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:20:29.231708       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1122 00:20:29.231831       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:20:29.311742       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:20:29.311826       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:20:29.317674       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:20:29.318116       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:20:29.318146       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:20:29.319485       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:20:29.319553       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:20:29.319530       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:20:29.319918       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:20:29.319527       1 config.go:200] "Starting service config controller"
	I1122 00:20:29.320124       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:20:29.319542       1 config.go:309] "Starting node config controller"
	I1122 00:20:29.320194       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:20:29.320203       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:20:29.420003       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1122 00:20:29.420815       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:20:29.420830       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7adf72f95bd8de9a99b3de1a9c91e0f10ca82b21b87f0ab404554319ad825707] <==
	E1122 00:20:20.606091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:20:20.606115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:20:20.606122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:20:20.606299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:20:20.606323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:20:20.606350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:20:20.606500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:20:20.606579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:20:20.606580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:20:20.606640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:20:20.606651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:20:20.606680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:20:21.500830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:20:21.505253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:20:21.529651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:20:21.531636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:20:21.602153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1122 00:20:21.619369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:20:21.766134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:20:21.796321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:20:21.837941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:20:21.846233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:20:21.864355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:20:21.889683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1122 00:20:22.200985       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:20:24 embed-certs-491677 kubelet[1434]: I1122 00:20:24.329818    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-491677" podStartSLOduration=3.329792314 podStartE2EDuration="3.329792314s" podCreationTimestamp="2025-11-22 00:20:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:20:24.319085166 +0000 UTC m=+1.153308791" watchObservedRunningTime="2025-11-22 00:20:24.329792314 +0000 UTC m=+1.164015921"
	Nov 22 00:20:24 embed-certs-491677 kubelet[1434]: I1122 00:20:24.329981    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-491677" podStartSLOduration=2.329969615 podStartE2EDuration="2.329969615s" podCreationTimestamp="2025-11-22 00:20:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:20:24.329911547 +0000 UTC m=+1.164135171" watchObservedRunningTime="2025-11-22 00:20:24.329969615 +0000 UTC m=+1.164193236"
	Nov 22 00:20:24 embed-certs-491677 kubelet[1434]: I1122 00:20:24.353590    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-491677" podStartSLOduration=1.3535689020000001 podStartE2EDuration="1.353568902s" podCreationTimestamp="2025-11-22 00:20:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:20:24.342483146 +0000 UTC m=+1.176706769" watchObservedRunningTime="2025-11-22 00:20:24.353568902 +0000 UTC m=+1.187792527"
	Nov 22 00:20:24 embed-certs-491677 kubelet[1434]: I1122 00:20:24.365825    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-491677" podStartSLOduration=1.365802228 podStartE2EDuration="1.365802228s" podCreationTimestamp="2025-11-22 00:20:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:20:24.353431294 +0000 UTC m=+1.187654938" watchObservedRunningTime="2025-11-22 00:20:24.365802228 +0000 UTC m=+1.200025852"
	Nov 22 00:20:27 embed-certs-491677 kubelet[1434]: I1122 00:20:27.601100    1434 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 22 00:20:27 embed-certs-491677 kubelet[1434]: I1122 00:20:27.601772    1434 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 22 00:20:28 embed-certs-491677 kubelet[1434]: I1122 00:20:28.470464    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aa71cc32-b446-45a5-b379-0bb74ac111be-kube-proxy\") pod \"kube-proxy-k9lgv\" (UID: \"aa71cc32-b446-45a5-b379-0bb74ac111be\") " pod="kube-system/kube-proxy-k9lgv"
	Nov 22 00:20:28 embed-certs-491677 kubelet[1434]: I1122 00:20:28.470519    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa71cc32-b446-45a5-b379-0bb74ac111be-xtables-lock\") pod \"kube-proxy-k9lgv\" (UID: \"aa71cc32-b446-45a5-b379-0bb74ac111be\") " pod="kube-system/kube-proxy-k9lgv"
	Nov 22 00:20:28 embed-certs-491677 kubelet[1434]: I1122 00:20:28.470553    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfnb6\" (UniqueName: \"kubernetes.io/projected/aa71cc32-b446-45a5-b379-0bb74ac111be-kube-api-access-lfnb6\") pod \"kube-proxy-k9lgv\" (UID: \"aa71cc32-b446-45a5-b379-0bb74ac111be\") " pod="kube-system/kube-proxy-k9lgv"
	Nov 22 00:20:28 embed-certs-491677 kubelet[1434]: I1122 00:20:28.470580    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6231b935-f44b-4e7b-a240-287c22f9547b-xtables-lock\") pod \"kindnet-hv86p\" (UID: \"6231b935-f44b-4e7b-a240-287c22f9547b\") " pod="kube-system/kindnet-hv86p"
	Nov 22 00:20:28 embed-certs-491677 kubelet[1434]: I1122 00:20:28.470606    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl6v4\" (UniqueName: \"kubernetes.io/projected/6231b935-f44b-4e7b-a240-287c22f9547b-kube-api-access-sl6v4\") pod \"kindnet-hv86p\" (UID: \"6231b935-f44b-4e7b-a240-287c22f9547b\") " pod="kube-system/kindnet-hv86p"
	Nov 22 00:20:28 embed-certs-491677 kubelet[1434]: I1122 00:20:28.470679    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa71cc32-b446-45a5-b379-0bb74ac111be-lib-modules\") pod \"kube-proxy-k9lgv\" (UID: \"aa71cc32-b446-45a5-b379-0bb74ac111be\") " pod="kube-system/kube-proxy-k9lgv"
	Nov 22 00:20:28 embed-certs-491677 kubelet[1434]: I1122 00:20:28.470768    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6231b935-f44b-4e7b-a240-287c22f9547b-cni-cfg\") pod \"kindnet-hv86p\" (UID: \"6231b935-f44b-4e7b-a240-287c22f9547b\") " pod="kube-system/kindnet-hv86p"
	Nov 22 00:20:28 embed-certs-491677 kubelet[1434]: I1122 00:20:28.470816    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6231b935-f44b-4e7b-a240-287c22f9547b-lib-modules\") pod \"kindnet-hv86p\" (UID: \"6231b935-f44b-4e7b-a240-287c22f9547b\") " pod="kube-system/kindnet-hv86p"
	Nov 22 00:20:30 embed-certs-491677 kubelet[1434]: I1122 00:20:30.303829    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k9lgv" podStartSLOduration=2.303805379 podStartE2EDuration="2.303805379s" podCreationTimestamp="2025-11-22 00:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:20:29.300211616 +0000 UTC m=+6.134435242" watchObservedRunningTime="2025-11-22 00:20:30.303805379 +0000 UTC m=+7.138029006"
	Nov 22 00:20:30 embed-certs-491677 kubelet[1434]: I1122 00:20:30.303958    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-hv86p" podStartSLOduration=2.30395145 podStartE2EDuration="2.30395145s" podCreationTimestamp="2025-11-22 00:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:20:30.303946832 +0000 UTC m=+7.138170454" watchObservedRunningTime="2025-11-22 00:20:30.30395145 +0000 UTC m=+7.138175087"
	Nov 22 00:20:39 embed-certs-491677 kubelet[1434]: I1122 00:20:39.876975    1434 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 22 00:20:39 embed-certs-491677 kubelet[1434]: I1122 00:20:39.960861    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59l8z\" (UniqueName: \"kubernetes.io/projected/957a225b-f96e-47aa-aea3-a77ff5b7843c-kube-api-access-59l8z\") pod \"storage-provisioner\" (UID: \"957a225b-f96e-47aa-aea3-a77ff5b7843c\") " pod="kube-system/storage-provisioner"
	Nov 22 00:20:39 embed-certs-491677 kubelet[1434]: I1122 00:20:39.960914    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5170ce81-2d67-4775-9d3e-7ba7d5b37f03-config-volume\") pod \"coredns-66bc5c9577-k2k88\" (UID: \"5170ce81-2d67-4775-9d3e-7ba7d5b37f03\") " pod="kube-system/coredns-66bc5c9577-k2k88"
	Nov 22 00:20:39 embed-certs-491677 kubelet[1434]: I1122 00:20:39.960940    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/957a225b-f96e-47aa-aea3-a77ff5b7843c-tmp\") pod \"storage-provisioner\" (UID: \"957a225b-f96e-47aa-aea3-a77ff5b7843c\") " pod="kube-system/storage-provisioner"
	Nov 22 00:20:39 embed-certs-491677 kubelet[1434]: I1122 00:20:39.960954    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb4wk\" (UniqueName: \"kubernetes.io/projected/5170ce81-2d67-4775-9d3e-7ba7d5b37f03-kube-api-access-nb4wk\") pod \"coredns-66bc5c9577-k2k88\" (UID: \"5170ce81-2d67-4775-9d3e-7ba7d5b37f03\") " pod="kube-system/coredns-66bc5c9577-k2k88"
	Nov 22 00:20:41 embed-certs-491677 kubelet[1434]: I1122 00:20:41.397951    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-k2k88" podStartSLOduration=13.397927104 podStartE2EDuration="13.397927104s" podCreationTimestamp="2025-11-22 00:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:20:41.37628932 +0000 UTC m=+18.210512945" watchObservedRunningTime="2025-11-22 00:20:41.397927104 +0000 UTC m=+18.232150728"
	Nov 22 00:20:41 embed-certs-491677 kubelet[1434]: I1122 00:20:41.437030    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.433241818 podStartE2EDuration="12.433241818s" podCreationTimestamp="2025-11-22 00:20:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:20:41.401045761 +0000 UTC m=+18.235269385" watchObservedRunningTime="2025-11-22 00:20:41.433241818 +0000 UTC m=+18.267465442"
	Nov 22 00:20:43 embed-certs-491677 kubelet[1434]: I1122 00:20:43.389881    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssb89\" (UniqueName: \"kubernetes.io/projected/7f94d7ba-76b7-4739-b7a9-81d27936e10f-kube-api-access-ssb89\") pod \"busybox\" (UID: \"7f94d7ba-76b7-4739-b7a9-81d27936e10f\") " pod="default/busybox"
	Nov 22 00:20:46 embed-certs-491677 kubelet[1434]: I1122 00:20:46.372229    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.3019412049999999 podStartE2EDuration="3.372205905s" podCreationTimestamp="2025-11-22 00:20:43 +0000 UTC" firstStartedPulling="2025-11-22 00:20:43.731373177 +0000 UTC m=+20.565596780" lastFinishedPulling="2025-11-22 00:20:45.801637864 +0000 UTC m=+22.635861480" observedRunningTime="2025-11-22 00:20:46.371900946 +0000 UTC m=+23.206124571" watchObservedRunningTime="2025-11-22 00:20:46.372205905 +0000 UTC m=+23.206429528"
	
	
	==> storage-provisioner [fd511a6c62f69e24313a3290e3f0e02acd3ce88feb2c9e7fe7296730e88cb3e4] <==
	I1122 00:20:40.465734       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:20:40.503342       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:20:40.503400       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1122 00:20:40.525548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:40.535166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:20:40.536556       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:20:40.536736       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-491677_54c74ed4-a9d3-4c1a-a5bf-4458cd9dc8d2!
	I1122 00:20:40.544345       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0b022146-00ff-4e08-8a06-2b5c1521d8c3", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-491677_54c74ed4-a9d3-4c1a-a5bf-4458cd9dc8d2 became leader
	W1122 00:20:40.552369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:40.560346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:20:40.642505       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-491677_54c74ed4-a9d3-4c1a-a5bf-4458cd9dc8d2!
	W1122 00:20:42.563894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:42.568233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:44.571281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:44.574930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:46.578883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:46.583703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:48.587499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:48.592458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:50.596971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:50.605925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:52.609331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:52.613214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-491677 -n embed-certs-491677
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-491677 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-491677
helpers_test.go:243: (dbg) docker inspect embed-certs-491677:

-- stdout --
	[
	    {
	        "Id": "bf732b8e13b6e65820e5672638180635c1c71c51b5044b3c2ddaf571c423ad78",
	        "Created": "2025-11-22T00:20:06.79977262Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 261687,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:20:06.837081251Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e5906b22e872a17998ae88aee6d850484e7a99144e0db6afcf2c44a53e6042d4",
	        "ResolvConfPath": "/var/lib/docker/containers/bf732b8e13b6e65820e5672638180635c1c71c51b5044b3c2ddaf571c423ad78/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bf732b8e13b6e65820e5672638180635c1c71c51b5044b3c2ddaf571c423ad78/hostname",
	        "HostsPath": "/var/lib/docker/containers/bf732b8e13b6e65820e5672638180635c1c71c51b5044b3c2ddaf571c423ad78/hosts",
	        "LogPath": "/var/lib/docker/containers/bf732b8e13b6e65820e5672638180635c1c71c51b5044b3c2ddaf571c423ad78/bf732b8e13b6e65820e5672638180635c1c71c51b5044b3c2ddaf571c423ad78-json.log",
	        "Name": "/embed-certs-491677",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-491677:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-491677",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bf732b8e13b6e65820e5672638180635c1c71c51b5044b3c2ddaf571c423ad78",
	                "LowerDir": "/var/lib/docker/overlay2/100dc12db05615eaf06dacd731e94d7443e9ef5d109aa3bd6714f1cc7c88f05c-init/diff:/var/lib/docker/overlay2/4b4af9a4e857911a6b5096aeeaee227ee7577c6eff3b08bbb4e765c49ed2fb70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/100dc12db05615eaf06dacd731e94d7443e9ef5d109aa3bd6714f1cc7c88f05c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/100dc12db05615eaf06dacd731e94d7443e9ef5d109aa3bd6714f1cc7c88f05c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/100dc12db05615eaf06dacd731e94d7443e9ef5d109aa3bd6714f1cc7c88f05c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-491677",
	                "Source": "/var/lib/docker/volumes/embed-certs-491677/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-491677",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-491677",
	                "name.minikube.sigs.k8s.io": "embed-certs-491677",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8a0e6de74db17b415e812d0739f1b3e2c5f7b9c165b269bc900dac10a1423d9b",
	            "SandboxKey": "/var/run/docker/netns/8a0e6de74db1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-491677": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "46fbcd0cae5a2d811f266bf4a0cbb02e2351cfcabdc238fccca0b8241b80909e",
	                    "EndpointID": "00c565579fa87c26656a136134b70f19215cff99bd9340a5c80f45cd5c120af9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "7e:b6:61:a8:ec:b7",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-491677",
	                        "bf732b8e13b6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-491677 -n embed-certs-491677
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-491677 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-491677 logs -n 25: (1.369452797s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-687868 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo containerd config dump                                                                                                                                                                                                        │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ ssh     │ -p cilium-687868 sudo crio config                                                                                                                                                                                                                   │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ delete  │ -p cilium-687868                                                                                                                                                                                                                                    │ cilium-687868          │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ start   │ -p old-k8s-version-462319 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-462319 │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:20 UTC │
	│ ssh     │ -p NoKubernetes-714059 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ start   │ -p cert-expiration-427330 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-427330 │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ delete  │ -p cert-expiration-427330                                                                                                                                                                                                                           │ cert-expiration-427330 │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ start   │ -p no-preload-781232 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-781232      │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:20 UTC │
	│ stop    │ -p NoKubernetes-714059                                                                                                                                                                                                                              │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ start   │ -p NoKubernetes-714059 --driver=docker  --container-runtime=containerd                                                                                                                                                                              │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:19 UTC │
	│ ssh     │ -p NoKubernetes-714059 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                             │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │                     │
	│ delete  │ -p NoKubernetes-714059                                                                                                                                                                                                                              │ NoKubernetes-714059    │ jenkins │ v1.37.0 │ 22 Nov 25 00:19 UTC │ 22 Nov 25 00:20 UTC │
	│ start   │ -p embed-certs-491677 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-491677     │ jenkins │ v1.37.0 │ 22 Nov 25 00:20 UTC │ 22 Nov 25 00:20 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-462319 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-462319 │ jenkins │ v1.37.0 │ 22 Nov 25 00:20 UTC │ 22 Nov 25 00:20 UTC │
	│ stop    │ -p old-k8s-version-462319 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-462319 │ jenkins │ v1.37.0 │ 22 Nov 25 00:20 UTC │ 22 Nov 25 00:20 UTC │
	│ addons  │ enable metrics-server -p no-preload-781232 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-781232      │ jenkins │ v1.37.0 │ 22 Nov 25 00:20 UTC │ 22 Nov 25 00:20 UTC │
	│ stop    │ -p no-preload-781232 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-781232      │ jenkins │ v1.37.0 │ 22 Nov 25 00:20 UTC │ 22 Nov 25 00:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-462319 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-462319 │ jenkins │ v1.37.0 │ 22 Nov 25 00:20 UTC │ 22 Nov 25 00:20 UTC │
	│ start   │ -p old-k8s-version-462319 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-462319 │ jenkins │ v1.37.0 │ 22 Nov 25 00:20 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-781232 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-781232      │ jenkins │ v1.37.0 │ 22 Nov 25 00:20 UTC │ 22 Nov 25 00:20 UTC │
	│ start   │ -p no-preload-781232 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-781232      │ jenkins │ v1.37.0 │ 22 Nov 25 00:20 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:20:40
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:20:40.886405  271651 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:20:40.886750  271651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:20:40.886763  271651 out.go:374] Setting ErrFile to fd 2...
	I1122 00:20:40.886771  271651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:20:40.887090  271651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9059/.minikube/bin
	I1122 00:20:40.887734  271651 out.go:368] Setting JSON to false
	I1122 00:20:40.889530  271651 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3780,"bootTime":1763767061,"procs":390,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:20:40.889615  271651 start.go:143] virtualization: kvm guest
	I1122 00:20:40.891913  271651 out.go:179] * [no-preload-781232] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:20:40.893519  271651 notify.go:221] Checking for updates...
	I1122 00:20:40.893538  271651 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:20:40.895181  271651 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:20:40.896644  271651 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9059/kubeconfig
	I1122 00:20:40.898014  271651 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9059/.minikube
	I1122 00:20:40.899285  271651 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:20:40.900518  271651 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:20:40.902454  271651 config.go:182] Loaded profile config "no-preload-781232": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:20:40.903040  271651 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:20:40.929978  271651 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1122 00:20:40.930114  271651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:20:41.006356  271651 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-22 00:20:40.993444426 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:20:41.006474  271651 docker.go:319] overlay module found
	I1122 00:20:41.009472  271651 out.go:179] * Using the docker driver based on existing profile
	I1122 00:20:41.010942  271651 start.go:309] selected driver: docker
	I1122 00:20:41.010966  271651 start.go:930] validating driver "docker" against &{Name:no-preload-781232 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-781232 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:20:41.011087  271651 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:20:41.011879  271651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:20:41.104934  271651 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-22 00:20:41.088985212 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:20:41.105442  271651 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:20:41.105488  271651 cni.go:84] Creating CNI manager for ""
	I1122 00:20:41.105564  271651 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:20:41.105648  271651 start.go:353] cluster config:
	{Name:no-preload-781232 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-781232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
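
	The block above is the in-memory cluster config that the log later reports saving to profiles/no-preload-781232/config.json. A small sketch of persisting a reduced subset of that structure as JSON; the field names are taken from the dump, but the struct layout here is an assumption for illustration, not minikube's schema:

	    package main

	    import (
	    	"encoding/json"
	    	"fmt"
	    )

	    // Illustrative subset of the profile config shown in the log dump.
	    type KubernetesConfig struct {
	    	KubernetesVersion string
	    	ClusterName       string
	    	ContainerRuntime  string
	    }

	    type ClusterConfig struct {
	    	Name             string
	    	Driver           string
	    	Memory           int
	    	CPUs             int
	    	KubernetesConfig KubernetesConfig
	    }

	    func main() {
	    	cfg := ClusterConfig{
	    		Name:   "no-preload-781232",
	    		Driver: "docker",
	    		Memory: 3072,
	    		CPUs:   2,
	    		KubernetesConfig: KubernetesConfig{
	    			KubernetesVersion: "v1.34.1",
	    			ClusterName:       "no-preload-781232",
	    			ContainerRuntime:  "containerd",
	    		},
	    	}
	    	out, _ := json.MarshalIndent(cfg, "", "  ")
	    	fmt.Println(string(out)) // what a config.json-style snapshot could look like
	    }
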
	I1122 00:20:41.112421  271651 out.go:179] * Starting "no-preload-781232" primary control-plane node in "no-preload-781232" cluster
	I1122 00:20:41.113936  271651 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1122 00:20:41.115178  271651 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:20:41.116381  271651 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:20:41.116485  271651 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:20:41.116551  271651 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232/config.json ...
	I1122 00:20:41.116678  271651 cache.go:107] acquiring lock: {Name:mk3cbf993e64f2a4d1538596c5feef81911b9052 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:20:41.116792  271651 cache.go:115] /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1122 00:20:41.116831  271651 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 170.313µs
	I1122 00:20:41.116835  271651 cache.go:107] acquiring lock: {Name:mkfebe1efa2de813c1c2eb3f37a54c832bf78fd7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:20:41.116887  271651 cache.go:107] acquiring lock: {Name:mk81179b55eac91a1d7e3a877c3f0b2f7481bd05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:20:41.116835  271651 cache.go:107] acquiring lock: {Name:mkeac22ae63d56187c9ebc31aef7cb1b078e1fb2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:20:41.118014  271651 cache.go:115] /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1122 00:20:41.118032  271651 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 1.144897ms
	I1122 00:20:41.118044  271651 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1122 00:20:41.116861  271651 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1122 00:20:41.116762  271651 cache.go:107] acquiring lock: {Name:mk69f6487a5dd2c7727468b62e1b8af4d70135bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:20:41.116864  271651 cache.go:107] acquiring lock: {Name:mk11527980a4bb905a3cb94827e56e2e74bc7fc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:20:41.118088  271651 cache.go:115] /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1122 00:20:41.118087  271651 cache.go:115] /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1122 00:20:41.118095  271651 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.233265ms
	I1122 00:20:41.116876  271651 cache.go:107] acquiring lock: {Name:mkca7af66c9bd0c8ceb77c9b6b55063268e48694 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:20:41.118116  271651 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1122 00:20:41.116921  271651 cache.go:107] acquiring lock: {Name:mk10fafdbb0634440e6c1d6dcf0e044001fbcbea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:20:41.118119  271651 cache.go:115] /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1122 00:20:41.118129  271651 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 1.302167ms
	I1122 00:20:41.118139  271651 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1122 00:20:41.118103  271651 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.427788ms
	I1122 00:20:41.118159  271651 cache.go:115] /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1122 00:20:41.118174  271651 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1122 00:20:41.118149  271651 cache.go:115] /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1122 00:20:41.118187  271651 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.267947ms
	I1122 00:20:41.118187  271651 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 1.296406ms
	I1122 00:20:41.116979  271651 cache.go:115] /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1122 00:20:41.118201  271651 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1122 00:20:41.118196  271651 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1122 00:20:41.118204  271651 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.373517ms
	I1122 00:20:41.118213  271651 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21934-9059/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1122 00:20:41.118222  271651 cache.go:87] Successfully saved all images to host disk.
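
	The cache lines above show the on-disk layout: each image reference maps to a tarball under .minikube/cache/images/<arch>/, with the tag separator ':' replaced by '_'. A minimal sketch of that path mapping, assuming only the layout visible in the log:

	    package main

	    import (
	    	"fmt"
	    	"path/filepath"
	    	"strings"
	    )

	    // cachePath mirrors the layout in the log, where e.g.
	    // "registry.k8s.io/etcd:3.6.4-0" is cached as
	    // .../cache/images/amd64/registry.k8s.io/etcd_3.6.4-0.
	    // The helper is an illustration, not minikube's implementation.
	    func cachePath(minikubeHome, arch, imageRef string) string {
	    	sanitized := strings.ReplaceAll(imageRef, ":", "_")
	    	return filepath.Join(minikubeHome, "cache", "images", arch, sanitized)
	    }

	    func main() {
	    	home := "/home/jenkins/minikube-integration/21934-9059/.minikube"
	    	for _, img := range []string{
	    		"gcr.io/k8s-minikube/storage-provisioner:v5",
	    		"registry.k8s.io/kube-apiserver:v1.34.1",
	    		"registry.k8s.io/etcd:3.6.4-0",
	    	} {
	    		fmt.Println(img, "->", cachePath(home, "amd64", img))
	    	}
	    }
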
	I1122 00:20:41.145996  271651 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:20:41.146026  271651 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:20:41.146042  271651 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:20:41.146078  271651 start.go:360] acquireMachinesLock for no-preload-781232: {Name:mkbfbcb44f7f9e1c764fa85467f8afec16e3b56f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:20:41.146137  271651 start.go:364] duration metric: took 40.701µs to acquireMachinesLock for "no-preload-781232"
	I1122 00:20:41.146156  271651 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:20:41.146163  271651 fix.go:54] fixHost starting: 
	I1122 00:20:41.146490  271651 cli_runner.go:164] Run: docker container inspect no-preload-781232 --format={{.State.Status}}
	I1122 00:20:41.172390  271651 fix.go:112] recreateIfNeeded on no-preload-781232: state=Stopped err=<nil>
	W1122 00:20:41.172431  271651 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 00:20:38.063632  269458 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1122 00:20:38.063720  269458 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1122 00:20:38.063790  269458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-462319
	I1122 00:20:38.067657  269458 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1122 00:20:38.067693  269458 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1122 00:20:38.067761  269458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-462319
	I1122 00:20:38.094915  269458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/old-k8s-version-462319/id_rsa Username:docker}
	I1122 00:20:38.096830  269458 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:20:38.096852  269458 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:20:38.096909  269458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-462319
	I1122 00:20:38.103367  269458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/old-k8s-version-462319/id_rsa Username:docker}
	I1122 00:20:38.108420  269458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/old-k8s-version-462319/id_rsa Username:docker}
	I1122 00:20:38.125718  269458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/old-k8s-version-462319/id_rsa Username:docker}
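
	The repeated docker container inspect -f calls above resolve which host port is published for the container's SSH port 22 (33078 here); the new ssh clients then dial that port on 127.0.0.1. A small sketch that shells out to the same template, assuming a local docker CLI and a running container:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    // sshHostPort uses the same inspect template as the log to find the
	    // host port mapped to the container's 22/tcp.
	    func sshHostPort(container string) (string, error) {
	    	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	    	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	    	if err != nil {
	    		return "", err
	    	}
	    	return strings.TrimSpace(string(out)), nil
	    }

	    func main() {
	    	port, err := sshHostPort("old-k8s-version-462319")
	    	if err != nil {
	    		fmt.Println("inspect failed:", err)
	    		return
	    	}
	    	fmt.Println("ssh reachable at 127.0.0.1:" + port)
	    }
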
	I1122 00:20:38.206740  269458 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:20:38.218490  269458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:20:38.224473  269458 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-462319" to be "Ready" ...
	I1122 00:20:38.234022  269458 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1122 00:20:38.234050  269458 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1122 00:20:38.235622  269458 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1122 00:20:38.235645  269458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1122 00:20:38.244718  269458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:20:38.254596  269458 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1122 00:20:38.254626  269458 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1122 00:20:38.255591  269458 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1122 00:20:38.255618  269458 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1122 00:20:38.272579  269458 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1122 00:20:38.272618  269458 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1122 00:20:38.276352  269458 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1122 00:20:38.276375  269458 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1122 00:20:38.294975  269458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1122 00:20:38.300369  269458 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1122 00:20:38.300394  269458 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1122 00:20:38.323115  269458 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1122 00:20:38.323154  269458 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1122 00:20:38.346754  269458 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1122 00:20:38.346784  269458 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1122 00:20:38.365277  269458 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1122 00:20:38.365303  269458 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1122 00:20:38.385806  269458 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1122 00:20:38.385833  269458 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1122 00:20:38.404309  269458 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1122 00:20:38.404343  269458 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1122 00:20:38.417900  269458 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
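
	The addon flow above is: scp each manifest into /etc/kubernetes/addons on the node, then issue a single kubectl apply with one -f flag per file. A minimal sketch of assembling such an invocation; the binary and manifest paths are illustrative, and the command is only printed rather than run:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    // buildApply assembles a "kubectl apply" invocation over several addon
	    // manifests, the same shape as the commands in the log.
	    func buildApply(kubectl, kubeconfig string, manifests []string) *exec.Cmd {
	    	args := []string{"apply"}
	    	for _, m := range manifests {
	    		args = append(args, "-f", m)
	    	}
	    	cmd := exec.Command(kubectl, args...)
	    	// The logged commands export KUBECONFIG for kubectl (under sudo);
	    	// here we only attach it to the Cmd for display.
	    	cmd.Env = append(cmd.Env, "KUBECONFIG="+kubeconfig)
	    	return cmd
	    }

	    func main() {
	    	cmd := buildApply(
	    		"/var/lib/minikube/binaries/v1.28.0/kubectl",
	    		"/var/lib/minikube/kubeconfig",
	    		[]string{
	    			"/etc/kubernetes/addons/dashboard-ns.yaml",
	    			"/etc/kubernetes/addons/dashboard-svc.yaml",
	    		},
	    	)
	    	fmt.Println(cmd.String()) // print the command line instead of running it
	    }
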
	I1122 00:20:40.488636  269458 node_ready.go:49] node "old-k8s-version-462319" is "Ready"
	I1122 00:20:40.488671  269458 node_ready.go:38] duration metric: took 2.264006021s for node "old-k8s-version-462319" to be "Ready" ...
	I1122 00:20:40.488689  269458 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:20:40.488748  269458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1122 00:20:37.882714  260527 node_ready.go:57] node "embed-certs-491677" has "Ready":"False" status (will retry)
	I1122 00:20:40.382005  260527 node_ready.go:49] node "embed-certs-491677" is "Ready"
	I1122 00:20:40.382048  260527 node_ready.go:38] duration metric: took 11.503562001s for node "embed-certs-491677" to be "Ready" ...
	I1122 00:20:40.382069  260527 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:20:40.382127  260527 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:20:40.401330  260527 api_server.go:72] duration metric: took 11.90660159s to wait for apiserver process to appear ...
	I1122 00:20:40.401364  260527 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:20:40.401388  260527 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1122 00:20:40.406625  260527 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1122 00:20:40.408004  260527 api_server.go:141] control plane version: v1.34.1
	I1122 00:20:40.408039  260527 api_server.go:131] duration metric: took 6.665705ms to wait for apiserver health ...
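
	The healthz wait above polls https://<node-ip>:8443/healthz until it returns 200 with body "ok". A minimal polling sketch; TLS verification is skipped here as a simplification, whereas the real check trusts the cluster CA:

	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"io"
	    	"net/http"
	    	"time"
	    )

	    // waitHealthz polls the apiserver /healthz endpoint until it answers
	    // 200/"ok" or the deadline passes.
	    func waitHealthz(url string, timeout time.Duration) error {
	    	client := &http.Client{
	    		Timeout:   2 * time.Second,
	    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	    	}
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		resp, err := client.Get(url)
	    		if err == nil {
	    			body, _ := io.ReadAll(resp.Body)
	    			resp.Body.Close()
	    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
	    				return nil
	    			}
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    	return fmt.Errorf("timed out waiting for %s", url)
	    }

	    func main() {
	    	if err := waitHealthz("https://192.168.85.2:8443/healthz", 30*time.Second); err != nil {
	    		fmt.Println(err)
	    	}
	    }
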
	I1122 00:20:40.408050  260527 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:20:40.412838  260527 system_pods.go:59] 8 kube-system pods found
	I1122 00:20:40.412972  260527 system_pods.go:61] "coredns-66bc5c9577-k2k88" [5170ce81-2d67-4775-9d3e-7ba7d5b37f03] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:40.412999  260527 system_pods.go:61] "etcd-embed-certs-491677" [c1a339bc-3e3b-4a29-b843-3bddd31ce5d7] Running
	I1122 00:20:40.413012  260527 system_pods.go:61] "kindnet-hv86p" [6231b935-f44b-4e7b-a240-287c22f9547b] Running
	I1122 00:20:40.413033  260527 system_pods.go:61] "kube-apiserver-embed-certs-491677" [b0fe5ce2-fabe-4f5f-87d9-a8775ed9324e] Running
	I1122 00:20:40.413043  260527 system_pods.go:61] "kube-controller-manager-embed-certs-491677" [bbc77c2e-0f6d-4ffa-9d92-8d82a0a96146] Running
	I1122 00:20:40.413049  260527 system_pods.go:61] "kube-proxy-k9lgv" [aa71cc32-b446-45a5-b379-0bb74ac111be] Running
	I1122 00:20:40.413060  260527 system_pods.go:61] "kube-scheduler-embed-certs-491677" [ae065e3d-a671-48ea-8c1e-aa1d1cb0eb3e] Running
	I1122 00:20:40.413076  260527 system_pods.go:61] "storage-provisioner" [957a225b-f96e-47aa-aea3-a77ff5b7843c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:40.413093  260527 system_pods.go:74] duration metric: took 5.035521ms to wait for pod list to return data ...
	I1122 00:20:40.413108  260527 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:20:40.415623  260527 default_sa.go:45] found service account: "default"
	I1122 00:20:40.415650  260527 default_sa.go:55] duration metric: took 2.533838ms for default service account to be created ...
	I1122 00:20:40.415662  260527 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:20:40.419678  260527 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:40.419720  260527 system_pods.go:89] "coredns-66bc5c9577-k2k88" [5170ce81-2d67-4775-9d3e-7ba7d5b37f03] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:40.419728  260527 system_pods.go:89] "etcd-embed-certs-491677" [c1a339bc-3e3b-4a29-b843-3bddd31ce5d7] Running
	I1122 00:20:40.419744  260527 system_pods.go:89] "kindnet-hv86p" [6231b935-f44b-4e7b-a240-287c22f9547b] Running
	I1122 00:20:40.419749  260527 system_pods.go:89] "kube-apiserver-embed-certs-491677" [b0fe5ce2-fabe-4f5f-87d9-a8775ed9324e] Running
	I1122 00:20:40.419754  260527 system_pods.go:89] "kube-controller-manager-embed-certs-491677" [bbc77c2e-0f6d-4ffa-9d92-8d82a0a96146] Running
	I1122 00:20:40.419759  260527 system_pods.go:89] "kube-proxy-k9lgv" [aa71cc32-b446-45a5-b379-0bb74ac111be] Running
	I1122 00:20:40.419765  260527 system_pods.go:89] "kube-scheduler-embed-certs-491677" [ae065e3d-a671-48ea-8c1e-aa1d1cb0eb3e] Running
	I1122 00:20:40.419771  260527 system_pods.go:89] "storage-provisioner" [957a225b-f96e-47aa-aea3-a77ff5b7843c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:40.419798  260527 retry.go:31] will retry after 270.438071ms: missing components: kube-dns
	I1122 00:20:40.695698  260527 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:40.695763  260527 system_pods.go:89] "coredns-66bc5c9577-k2k88" [5170ce81-2d67-4775-9d3e-7ba7d5b37f03] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:40.695773  260527 system_pods.go:89] "etcd-embed-certs-491677" [c1a339bc-3e3b-4a29-b843-3bddd31ce5d7] Running
	I1122 00:20:40.695783  260527 system_pods.go:89] "kindnet-hv86p" [6231b935-f44b-4e7b-a240-287c22f9547b] Running
	I1122 00:20:40.695793  260527 system_pods.go:89] "kube-apiserver-embed-certs-491677" [b0fe5ce2-fabe-4f5f-87d9-a8775ed9324e] Running
	I1122 00:20:40.695799  260527 system_pods.go:89] "kube-controller-manager-embed-certs-491677" [bbc77c2e-0f6d-4ffa-9d92-8d82a0a96146] Running
	I1122 00:20:40.695807  260527 system_pods.go:89] "kube-proxy-k9lgv" [aa71cc32-b446-45a5-b379-0bb74ac111be] Running
	I1122 00:20:40.695812  260527 system_pods.go:89] "kube-scheduler-embed-certs-491677" [ae065e3d-a671-48ea-8c1e-aa1d1cb0eb3e] Running
	I1122 00:20:40.695821  260527 system_pods.go:89] "storage-provisioner" [957a225b-f96e-47aa-aea3-a77ff5b7843c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:40.695838  260527 retry.go:31] will retry after 368.508675ms: missing components: kube-dns
	I1122 00:20:41.074998  260527 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:41.075042  260527 system_pods.go:89] "coredns-66bc5c9577-k2k88" [5170ce81-2d67-4775-9d3e-7ba7d5b37f03] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:41.075190  260527 system_pods.go:89] "etcd-embed-certs-491677" [c1a339bc-3e3b-4a29-b843-3bddd31ce5d7] Running
	I1122 00:20:41.075204  260527 system_pods.go:89] "kindnet-hv86p" [6231b935-f44b-4e7b-a240-287c22f9547b] Running
	I1122 00:20:41.075288  260527 system_pods.go:89] "kube-apiserver-embed-certs-491677" [b0fe5ce2-fabe-4f5f-87d9-a8775ed9324e] Running
	I1122 00:20:41.075298  260527 system_pods.go:89] "kube-controller-manager-embed-certs-491677" [bbc77c2e-0f6d-4ffa-9d92-8d82a0a96146] Running
	I1122 00:20:41.075303  260527 system_pods.go:89] "kube-proxy-k9lgv" [aa71cc32-b446-45a5-b379-0bb74ac111be] Running
	I1122 00:20:41.075317  260527 system_pods.go:89] "kube-scheduler-embed-certs-491677" [ae065e3d-a671-48ea-8c1e-aa1d1cb0eb3e] Running
	I1122 00:20:41.075325  260527 system_pods.go:89] "storage-provisioner" [957a225b-f96e-47aa-aea3-a77ff5b7843c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:41.075473  260527 retry.go:31] will retry after 369.288531ms: missing components: kube-dns
	I1122 00:20:41.454047  260527 system_pods.go:86] 8 kube-system pods found
	I1122 00:20:41.454089  260527 system_pods.go:89] "coredns-66bc5c9577-k2k88" [5170ce81-2d67-4775-9d3e-7ba7d5b37f03] Running
	I1122 00:20:41.454098  260527 system_pods.go:89] "etcd-embed-certs-491677" [c1a339bc-3e3b-4a29-b843-3bddd31ce5d7] Running
	I1122 00:20:41.454104  260527 system_pods.go:89] "kindnet-hv86p" [6231b935-f44b-4e7b-a240-287c22f9547b] Running
	I1122 00:20:41.454110  260527 system_pods.go:89] "kube-apiserver-embed-certs-491677" [b0fe5ce2-fabe-4f5f-87d9-a8775ed9324e] Running
	I1122 00:20:41.454121  260527 system_pods.go:89] "kube-controller-manager-embed-certs-491677" [bbc77c2e-0f6d-4ffa-9d92-8d82a0a96146] Running
	I1122 00:20:41.454127  260527 system_pods.go:89] "kube-proxy-k9lgv" [aa71cc32-b446-45a5-b379-0bb74ac111be] Running
	I1122 00:20:41.454131  260527 system_pods.go:89] "kube-scheduler-embed-certs-491677" [ae065e3d-a671-48ea-8c1e-aa1d1cb0eb3e] Running
	I1122 00:20:41.454136  260527 system_pods.go:89] "storage-provisioner" [957a225b-f96e-47aa-aea3-a77ff5b7843c] Running
	I1122 00:20:41.454148  260527 system_pods.go:126] duration metric: took 1.038478177s to wait for k8s-apps to be running ...
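
	The "will retry after ...ms: missing components: kube-dns" lines above come from a poll-and-backoff loop over the kube-system pod list. A generic sketch of that pattern; the jitter range is an assumption for the example:

	    package main

	    import (
	    	"fmt"
	    	"math/rand"
	    	"time"
	    )

	    // retryUntil keeps calling check with a short randomized backoff until
	    // nothing is reported missing or the timeout elapses.
	    func retryUntil(timeout time.Duration, check func() (missing []string)) error {
	    	deadline := time.Now().Add(timeout)
	    	for {
	    		missing := check()
	    		if len(missing) == 0 {
	    			return nil
	    		}
	    		if time.Now().After(deadline) {
	    			return fmt.Errorf("timed out, still missing: %v", missing)
	    		}
	    		wait := 200*time.Millisecond + time.Duration(rand.Intn(300))*time.Millisecond
	    		fmt.Printf("will retry after %v: missing components: %v\n", wait, missing)
	    		time.Sleep(wait)
	    	}
	    }

	    func main() {
	    	attempts := 0
	    	_ = retryUntil(10*time.Second, func() []string {
	    		attempts++
	    		if attempts < 3 {
	    			return []string{"kube-dns"} // pretend CoreDNS is still Pending
	    		}
	    		return nil
	    	})
	    	fmt.Println("all components running")
	    }
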
	I1122 00:20:41.454162  260527 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:20:41.454609  260527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:20:41.479978  260527 system_svc.go:56] duration metric: took 25.803347ms WaitForService to wait for kubelet
	I1122 00:20:41.480012  260527 kubeadm.go:587] duration metric: took 12.985287639s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:20:41.480169  260527 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:20:41.483525  260527 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1122 00:20:41.483555  260527 node_conditions.go:123] node cpu capacity is 8
	I1122 00:20:41.483573  260527 node_conditions.go:105] duration metric: took 3.388568ms to run NodePressure ...
	I1122 00:20:41.483589  260527 start.go:242] waiting for startup goroutines ...
	I1122 00:20:41.483598  260527 start.go:247] waiting for cluster config update ...
	I1122 00:20:41.483611  260527 start.go:256] writing updated cluster config ...
	I1122 00:20:41.484070  260527 ssh_runner.go:195] Run: rm -f paused
	I1122 00:20:41.490981  260527 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:20:41.422839  269458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.178083885s)
	I1122 00:20:41.425002  269458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.206471244s)
	I1122 00:20:41.505527  269458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.210507816s)
	I1122 00:20:41.505568  269458 addons.go:495] Verifying addon metrics-server=true in "old-k8s-version-462319"
	I1122 00:20:42.075153  269458 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.65720666s)
	I1122 00:20:42.075599  269458 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.586814064s)
	I1122 00:20:42.075634  269458 api_server.go:72] duration metric: took 4.051404168s to wait for apiserver process to appear ...
	I1122 00:20:42.075641  269458 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:20:42.075680  269458 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1122 00:20:42.080775  269458 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-462319 addons enable metrics-server
	
	I1122 00:20:42.082895  269458 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1122 00:20:42.085028  269458 api_server.go:141] control plane version: v1.28.0
	I1122 00:20:42.085061  269458 api_server.go:131] duration metric: took 9.41083ms to wait for apiserver health ...
	I1122 00:20:42.085072  269458 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:20:42.085122  269458 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1122 00:20:40.938726  218693 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.062373977s)
	W1122 00:20:40.938788  218693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1122 00:20:40.938803  218693 logs.go:123] Gathering logs for kube-apiserver [81142e7ed280b3e7a5cc59a091de86e66620f3841486d40ef51f3c845bf698f6] ...
	I1122 00:20:40.938819  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 81142e7ed280b3e7a5cc59a091de86e66620f3841486d40ef51f3c845bf698f6"
	I1122 00:20:40.987385  218693 logs.go:123] Gathering logs for kube-apiserver [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d] ...
	I1122 00:20:40.987423  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:41.040162  218693 logs.go:123] Gathering logs for kube-apiserver [2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587] ...
	I1122 00:20:41.040209  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:41.104994  218693 logs.go:123] Gathering logs for etcd [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7] ...
	I1122 00:20:41.105033  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:41.151516  218693 logs.go:123] Gathering logs for kube-controller-manager [718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f] ...
	I1122 00:20:41.151548  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f"
	I1122 00:20:41.190029  218693 logs.go:123] Gathering logs for containerd ...
	I1122 00:20:41.190069  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1122 00:20:41.257775  218693 logs.go:123] Gathering logs for kubelet ...
	I1122 00:20:41.257820  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:20:41.419761  218693 logs.go:123] Gathering logs for kube-scheduler [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78] ...
	I1122 00:20:41.419803  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:41.509160  218693 logs.go:123] Gathering logs for kube-scheduler [b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2] ...
	I1122 00:20:41.509201  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:41.551859  218693 logs.go:123] Gathering logs for kube-controller-manager [13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216] ...
	I1122 00:20:41.551894  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:41.599128  218693 logs.go:123] Gathering logs for container status ...
	I1122 00:20:41.599167  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:20:41.552193  260527 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-k2k88" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:41.559444  260527 pod_ready.go:94] pod "coredns-66bc5c9577-k2k88" is "Ready"
	I1122 00:20:41.559475  260527 pod_ready.go:86] duration metric: took 7.254295ms for pod "coredns-66bc5c9577-k2k88" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:41.563163  260527 pod_ready.go:83] waiting for pod "etcd-embed-certs-491677" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:41.569374  260527 pod_ready.go:94] pod "etcd-embed-certs-491677" is "Ready"
	I1122 00:20:41.569405  260527 pod_ready.go:86] duration metric: took 6.207246ms for pod "etcd-embed-certs-491677" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:41.572654  260527 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-491677" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:41.578166  260527 pod_ready.go:94] pod "kube-apiserver-embed-certs-491677" is "Ready"
	I1122 00:20:41.578197  260527 pod_ready.go:86] duration metric: took 5.508968ms for pod "kube-apiserver-embed-certs-491677" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:41.581493  260527 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-491677" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:41.897425  260527 pod_ready.go:94] pod "kube-controller-manager-embed-certs-491677" is "Ready"
	I1122 00:20:41.897549  260527 pod_ready.go:86] duration metric: took 316.026753ms for pod "kube-controller-manager-embed-certs-491677" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:42.095784  260527 pod_ready.go:83] waiting for pod "kube-proxy-k9lgv" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:42.496147  260527 pod_ready.go:94] pod "kube-proxy-k9lgv" is "Ready"
	I1122 00:20:42.496186  260527 pod_ready.go:86] duration metric: took 400.373365ms for pod "kube-proxy-k9lgv" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:42.697075  260527 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-491677" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:43.096481  260527 pod_ready.go:94] pod "kube-scheduler-embed-certs-491677" is "Ready"
	I1122 00:20:43.096511  260527 pod_ready.go:86] duration metric: took 399.407479ms for pod "kube-scheduler-embed-certs-491677" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:20:43.096527  260527 pod_ready.go:40] duration metric: took 1.60549947s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:20:43.142523  260527 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1122 00:20:43.144413  260527 out.go:179] * Done! kubectl is now configured to use "embed-certs-491677" cluster and "default" namespace by default
	I1122 00:20:41.174971  271651 out.go:252] * Restarting existing docker container for "no-preload-781232" ...
	I1122 00:20:41.175094  271651 cli_runner.go:164] Run: docker start no-preload-781232
	I1122 00:20:41.602742  271651 cli_runner.go:164] Run: docker container inspect no-preload-781232 --format={{.State.Status}}
	I1122 00:20:41.631107  271651 kic.go:430] container "no-preload-781232" state is running.
	I1122 00:20:41.632580  271651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-781232
	I1122 00:20:41.666003  271651 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232/config.json ...
	I1122 00:20:41.666250  271651 machine.go:94] provisionDockerMachine start ...
	I1122 00:20:41.666331  271651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-781232
	I1122 00:20:41.694892  271651 main.go:143] libmachine: Using SSH client type: native
	I1122 00:20:41.695305  271651 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1122 00:20:41.695325  271651 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:20:41.696539  271651 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53028->127.0.0.1:33083: read: connection reset by peer
	I1122 00:20:44.822620  271651 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-781232
	
	I1122 00:20:44.822651  271651 ubuntu.go:182] provisioning hostname "no-preload-781232"
	I1122 00:20:44.822782  271651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-781232
	I1122 00:20:44.841668  271651 main.go:143] libmachine: Using SSH client type: native
	I1122 00:20:44.841894  271651 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1122 00:20:44.841913  271651 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-781232 && echo "no-preload-781232" | sudo tee /etc/hostname
	I1122 00:20:44.978746  271651 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-781232
	
	I1122 00:20:44.978833  271651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-781232
	I1122 00:20:44.999248  271651 main.go:143] libmachine: Using SSH client type: native
	I1122 00:20:44.999578  271651 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I1122 00:20:44.999605  271651 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-781232' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-781232/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-781232' | sudo tee -a /etc/hosts; 
				fi
			fi
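
	The shell snippet above makes sure the container's /etc/hosts resolves the new hostname: leave the file alone if the name is already present, rewrite an existing 127.0.1.1 entry if there is one, otherwise append a new one. The same decision applied to an in-memory hosts file, as a sketch:

	    package main

	    import (
	    	"fmt"
	    	"regexp"
	    	"strings"
	    )

	    // ensureHostname mirrors the shell snippet on a hosts-file string.
	    func ensureHostname(hosts, name string) string {
	    	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
	    		return hosts // hostname already resolvable
	    	}
	    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	    	if loopback.MatchString(hosts) {
	    		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	    	}
	    	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	    }

	    func main() {
	    	hosts := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
	    	fmt.Print(ensureHostname(hosts, "no-preload-781232"))
	    }
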
	I1122 00:20:45.126381  271651 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:20:45.126418  271651 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9059/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9059/.minikube}
	I1122 00:20:45.126494  271651 ubuntu.go:190] setting up certificates
	I1122 00:20:45.126507  271651 provision.go:84] configureAuth start
	I1122 00:20:45.126584  271651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-781232
	I1122 00:20:45.147608  271651 provision.go:143] copyHostCerts
	I1122 00:20:45.147684  271651 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9059/.minikube/ca.pem, removing ...
	I1122 00:20:45.147704  271651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9059/.minikube/ca.pem
	I1122 00:20:45.147772  271651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9059/.minikube/ca.pem (1082 bytes)
	I1122 00:20:45.147914  271651 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9059/.minikube/cert.pem, removing ...
	I1122 00:20:45.147932  271651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9059/.minikube/cert.pem
	I1122 00:20:45.147966  271651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9059/.minikube/cert.pem (1123 bytes)
	I1122 00:20:45.148050  271651 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9059/.minikube/key.pem, removing ...
	I1122 00:20:45.148064  271651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9059/.minikube/key.pem
	I1122 00:20:45.148090  271651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9059/.minikube/key.pem (1679 bytes)
	I1122 00:20:45.148174  271651 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca-key.pem org=jenkins.no-preload-781232 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-781232]
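
	The server certificate above is generated with the SANs listed in the log: the loopback and node IPs plus the localhost, minikube, and profile hostnames. A sketch of the corresponding x509 template; signing against the minikube CA is omitted, and the expiry simply reuses the CertExpiration value from the config dump:

	    package main

	    import (
	    	"crypto/x509"
	    	"crypto/x509/pkix"
	    	"fmt"
	    	"math/big"
	    	"net"
	    	"time"
	    )

	    // serverTemplate builds a server-cert template carrying the SAN IPs and
	    // DNS names from the log line above.
	    func serverTemplate(cn string, ips []net.IP, dns []string) *x509.Certificate {
	    	return &x509.Certificate{
	    		SerialNumber: big.NewInt(1),
	    		Subject:      pkix.Name{CommonName: cn},
	    		NotBefore:    time.Now(),
	    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s
	    		IPAddresses:  ips,
	    		DNSNames:     dns,
	    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	    	}
	    }

	    func main() {
	    	tmpl := serverTemplate(
	    		"no-preload-781232",
	    		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
	    		[]string{"localhost", "minikube", "no-preload-781232"},
	    	)
	    	fmt.Println("SAN IPs:", tmpl.IPAddresses, "SAN DNS:", tmpl.DNSNames)
	    }
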
	I1122 00:20:45.216841  271651 provision.go:177] copyRemoteCerts
	I1122 00:20:45.216897  271651 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:20:45.216931  271651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-781232
	I1122 00:20:45.236085  271651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/no-preload-781232/id_rsa Username:docker}
	I1122 00:20:45.330063  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1122 00:20:45.349841  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1122 00:20:45.369530  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:20:45.389053  271651 provision.go:87] duration metric: took 262.532523ms to configureAuth
	I1122 00:20:45.389081  271651 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:20:45.389285  271651 config.go:182] Loaded profile config "no-preload-781232": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:20:45.389299  271651 machine.go:97] duration metric: took 3.723023836s to provisionDockerMachine
	I1122 00:20:45.389308  271651 start.go:293] postStartSetup for "no-preload-781232" (driver="docker")
	I1122 00:20:45.389316  271651 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:20:45.389386  271651 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:20:45.389430  271651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-781232
	I1122 00:20:45.409210  271651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/no-preload-781232/id_rsa Username:docker}
	I1122 00:20:45.502549  271651 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:20:45.506540  271651 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:20:45.506564  271651 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:20:45.506575  271651 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9059/.minikube/addons for local assets ...
	I1122 00:20:45.506651  271651 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9059/.minikube/files for local assets ...
	I1122 00:20:45.506725  271651 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/ssl/certs/145302.pem -> 145302.pem in /etc/ssl/certs
	I1122 00:20:45.506816  271651 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:20:45.515318  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/ssl/certs/145302.pem --> /etc/ssl/certs/145302.pem (1708 bytes)
	I1122 00:20:45.533908  271651 start.go:296] duration metric: took 144.585465ms for postStartSetup
	I1122 00:20:45.534028  271651 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:20:45.534124  271651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-781232
	I1122 00:20:45.554579  271651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/no-preload-781232/id_rsa Username:docker}
	I1122 00:20:45.644942  271651 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:20:45.649876  271651 fix.go:56] duration metric: took 4.503705231s for fixHost
	I1122 00:20:45.649904  271651 start.go:83] releasing machines lock for "no-preload-781232", held for 4.503755569s
	I1122 00:20:45.650004  271651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-781232
	I1122 00:20:45.670195  271651 ssh_runner.go:195] Run: cat /version.json
	I1122 00:20:45.670306  271651 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:20:45.670312  271651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-781232
	I1122 00:20:45.670356  271651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-781232
	I1122 00:20:45.690464  271651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/no-preload-781232/id_rsa Username:docker}
	I1122 00:20:45.691547  271651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/no-preload-781232/id_rsa Username:docker}
	I1122 00:20:45.781325  271651 ssh_runner.go:195] Run: systemctl --version
	I1122 00:20:45.847568  271651 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:20:45.852509  271651 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:20:45.852580  271651 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:20:45.861868  271651 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
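Editor's note: the CNI-disable command above is logged with its shell escaping stripped. A readable sketch of the same operation (same directory and .mk_disabled naming convention, escaping restored) would be:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -exec sh -c 'echo "disabling $1"; sudo mv "$1" "$1.mk_disabled"' _ {} \;

In this run nothing matched, hence the "nothing to disable" message above.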
	I1122 00:20:45.861894  271651 start.go:496] detecting cgroup driver to use...
	I1122 00:20:45.861942  271651 detect.go:190] detected "systemd" cgroup driver on host os
	I1122 00:20:45.862001  271651 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1122 00:20:45.883582  271651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1122 00:20:42.086410  269458 addons.go:530] duration metric: took 4.062719509s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1122 00:20:42.090401  269458 system_pods.go:59] 9 kube-system pods found
	I1122 00:20:42.090448  269458 system_pods.go:61] "coredns-5dd5756b68-pqbfp" [44750e8d-5eeb-4845-9029-a58cbf976b62] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:42.090462  269458 system_pods.go:61] "etcd-old-k8s-version-462319" [9580468b-aa0f-4d73-9c35-f9cc4c817cdd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:20:42.090489  269458 system_pods.go:61] "kindnet-ldtd8" [6bf161d2-c442-466d-98b8-c313a127bf22] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1122 00:20:42.090499  269458 system_pods.go:61] "kube-apiserver-old-k8s-version-462319" [2f4b6fd0-2929-448d-820c-aabf2a9d4744] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:20:42.090512  269458 system_pods.go:61] "kube-controller-manager-old-k8s-version-462319" [83b4a291-8bac-4581-b4a6-80471e7228eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:20:42.090521  269458 system_pods.go:61] "kube-proxy-kqrng" [643cd348-4af3-4720-af0d-e931f184742c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:20:42.090533  269458 system_pods.go:61] "kube-scheduler-old-k8s-version-462319" [c1dc982d-cc79-4df6-bdc4-7e47f5d5236c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:20:42.090542  269458 system_pods.go:61] "metrics-server-57f55c9bc5-m2z8b" [d6d9bc49-d78b-4c7d-9bda-04e70f660290] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1122 00:20:42.090549  269458 system_pods.go:61] "storage-provisioner" [fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:42.090558  269458 system_pods.go:74] duration metric: took 5.478417ms to wait for pod list to return data ...
	I1122 00:20:42.090570  269458 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:20:42.092794  269458 default_sa.go:45] found service account: "default"
	I1122 00:20:42.092813  269458 default_sa.go:55] duration metric: took 2.232935ms for default service account to be created ...
	I1122 00:20:42.092821  269458 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:20:42.096929  269458 system_pods.go:86] 9 kube-system pods found
	I1122 00:20:42.096960  269458 system_pods.go:89] "coredns-5dd5756b68-pqbfp" [44750e8d-5eeb-4845-9029-a58cbf976b62] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:20:42.096971  269458 system_pods.go:89] "etcd-old-k8s-version-462319" [9580468b-aa0f-4d73-9c35-f9cc4c817cdd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:20:42.096982  269458 system_pods.go:89] "kindnet-ldtd8" [6bf161d2-c442-466d-98b8-c313a127bf22] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1122 00:20:42.096999  269458 system_pods.go:89] "kube-apiserver-old-k8s-version-462319" [2f4b6fd0-2929-448d-820c-aabf2a9d4744] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:20:42.097013  269458 system_pods.go:89] "kube-controller-manager-old-k8s-version-462319" [83b4a291-8bac-4581-b4a6-80471e7228eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:20:42.097020  269458 system_pods.go:89] "kube-proxy-kqrng" [643cd348-4af3-4720-af0d-e931f184742c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:20:42.097025  269458 system_pods.go:89] "kube-scheduler-old-k8s-version-462319" [c1dc982d-cc79-4df6-bdc4-7e47f5d5236c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:20:42.097030  269458 system_pods.go:89] "metrics-server-57f55c9bc5-m2z8b" [d6d9bc49-d78b-4c7d-9bda-04e70f660290] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1122 00:20:42.097039  269458 system_pods.go:89] "storage-provisioner" [fc0f2774-324d-4c1a-97b7-d3e3d30ea8b2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:20:42.097046  269458 system_pods.go:126] duration metric: took 4.219175ms to wait for k8s-apps to be running ...
	I1122 00:20:42.097053  269458 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:20:42.097104  269458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:20:42.112045  269458 system_svc.go:56] duration metric: took 14.981035ms WaitForService to wait for kubelet
	I1122 00:20:42.112078  269458 kubeadm.go:587] duration metric: took 4.087849153s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:20:42.112098  269458 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:20:42.114860  269458 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1122 00:20:42.114884  269458 node_conditions.go:123] node cpu capacity is 8
	I1122 00:20:42.114898  269458 node_conditions.go:105] duration metric: took 2.795002ms to run NodePressure ...
	I1122 00:20:42.114914  269458 start.go:242] waiting for startup goroutines ...
	I1122 00:20:42.114925  269458 start.go:247] waiting for cluster config update ...
	I1122 00:20:42.114938  269458 start.go:256] writing updated cluster config ...
	I1122 00:20:42.115180  269458 ssh_runner.go:195] Run: rm -f paused
	I1122 00:20:42.119502  269458 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:20:42.124404  269458 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-pqbfp" in "kube-system" namespace to be "Ready" or be gone ...
	W1122 00:20:44.130807  269458 pod_ready.go:104] pod "coredns-5dd5756b68-pqbfp" is not "Ready", error: <nil>
	W1122 00:20:46.131721  269458 pod_ready.go:104] pod "coredns-5dd5756b68-pqbfp" is not "Ready", error: <nil>
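Editor's note: the "extra waiting" loop above polls pod conditions until each control-plane pod reports Ready. An equivalent spot check from the host, assuming the kubectl context carries the same name as the profile seen in the pod names (old-k8s-version-462319), would be:

    kubectl --context old-k8s-version-462319 -n kube-system get pod coredns-5dd5756b68-pqbfp \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # prints "True" once the pod is Ready; the warnings above correspond to "False"
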
	I1122 00:20:45.898007  271651 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:20:45.898082  271651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:20:45.916642  271651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:20:45.932424  271651 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:20:46.029899  271651 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:20:46.129762  271651 docker.go:234] disabling docker service ...
	I1122 00:20:46.129828  271651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:20:46.147182  271651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:20:46.162484  271651 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:20:46.254302  271651 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:20:46.343398  271651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:20:46.357762  271651 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:20:46.374877  271651 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1122 00:20:46.384787  271651 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1122 00:20:46.394901  271651 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1122 00:20:46.394976  271651 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1122 00:20:46.405203  271651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:20:46.416542  271651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1122 00:20:46.426018  271651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:20:46.436068  271651 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:20:46.446031  271651 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1122 00:20:46.456283  271651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1122 00:20:46.466098  271651 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1122 00:20:46.475564  271651 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:20:46.483749  271651 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:20:46.492209  271651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:20:46.580614  271651 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1122 00:20:46.690901  271651 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1122 00:20:46.691011  271651 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
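Editor's note: the containerd reconfiguration above boils down to flipping SystemdCgroup, reloading systemd, restarting the service, and then waiting for the CRI socket to reappear. A condensed sketch of those steps (default config path and the same socket path as in the log):

    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload
    sudo systemctl restart containerd
    # wait up to 60s for the CRI socket, mirroring the "Will wait 60s" step above
    timeout 60 sh -c 'until stat /run/containerd/containerd.sock >/dev/null 2>&1; do sleep 1; done'
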
	I1122 00:20:46.695597  271651 start.go:564] Will wait 60s for crictl version
	I1122 00:20:46.695674  271651 ssh_runner.go:195] Run: which crictl
	I1122 00:20:46.700386  271651 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:20:46.730033  271651 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1122 00:20:46.730103  271651 ssh_runner.go:195] Run: containerd --version
	I1122 00:20:46.753517  271651 ssh_runner.go:195] Run: containerd --version
	I1122 00:20:46.779524  271651 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
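Editor's note: the crictl calls above work without flags because of the runtime-endpoint written to /etc/crictl.yaml earlier. The explicit equivalent, using the same socket, would be:

    sudo crictl version
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version   # same query, endpoint spelled out
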
	I1122 00:20:44.162068  218693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:20:45.910683  218693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:46722->192.168.76.2:8443: read: connection reset by peer
	I1122 00:20:45.910762  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:20:45.910821  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:20:45.942961  218693 cri.go:89] found id: "81142e7ed280b3e7a5cc59a091de86e66620f3841486d40ef51f3c845bf698f6"
	I1122 00:20:45.943053  218693 cri.go:89] found id: "031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:45.943063  218693 cri.go:89] found id: "2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:45.943084  218693 cri.go:89] found id: ""
	I1122 00:20:45.943095  218693 logs.go:282] 3 containers: [81142e7ed280b3e7a5cc59a091de86e66620f3841486d40ef51f3c845bf698f6 031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587]
	I1122 00:20:45.943203  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:45.948859  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:45.952999  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:45.957041  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1122 00:20:45.957122  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:20:45.988998  218693 cri.go:89] found id: "ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:45.989018  218693 cri.go:89] found id: ""
	I1122 00:20:45.989026  218693 logs.go:282] 1 containers: [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7]
	I1122 00:20:45.989073  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:45.993507  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1122 00:20:45.993569  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:20:46.021440  218693 cri.go:89] found id: ""
	I1122 00:20:46.021465  218693 logs.go:282] 0 containers: []
	W1122 00:20:46.021477  218693 logs.go:284] No container was found matching "coredns"
	I1122 00:20:46.021485  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:20:46.021548  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:20:46.051857  218693 cri.go:89] found id: "8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:46.051885  218693 cri.go:89] found id: "b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:46.051889  218693 cri.go:89] found id: ""
	I1122 00:20:46.051921  218693 logs.go:282] 2 containers: [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2]
	I1122 00:20:46.051968  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:46.056981  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:46.061726  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:20:46.061802  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:20:46.096128  218693 cri.go:89] found id: ""
	I1122 00:20:46.096172  218693 logs.go:282] 0 containers: []
	W1122 00:20:46.096184  218693 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:20:46.096194  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:20:46.096271  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:20:46.123687  218693 cri.go:89] found id: "718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f"
	I1122 00:20:46.123714  218693 cri.go:89] found id: "13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:46.123720  218693 cri.go:89] found id: ""
	I1122 00:20:46.123729  218693 logs.go:282] 2 containers: [718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216]
	I1122 00:20:46.123790  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:46.128818  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:46.133506  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1122 00:20:46.133581  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:20:46.162071  218693 cri.go:89] found id: ""
	I1122 00:20:46.162099  218693 logs.go:282] 0 containers: []
	W1122 00:20:46.162107  218693 logs.go:284] No container was found matching "kindnet"
	I1122 00:20:46.162119  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:20:46.162178  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:20:46.197735  218693 cri.go:89] found id: ""
	I1122 00:20:46.197772  218693 logs.go:282] 0 containers: []
	W1122 00:20:46.197787  218693 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:20:46.197800  218693 logs.go:123] Gathering logs for kube-scheduler [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78] ...
	I1122 00:20:46.197816  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:46.256663  218693 logs.go:123] Gathering logs for kube-scheduler [b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2] ...
	I1122 00:20:46.256690  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:46.301782  218693 logs.go:123] Gathering logs for kube-controller-manager [13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216] ...
	I1122 00:20:46.301819  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:46.335279  218693 logs.go:123] Gathering logs for containerd ...
	I1122 00:20:46.335311  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1122 00:20:46.388372  218693 logs.go:123] Gathering logs for kubelet ...
	I1122 00:20:46.388402  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:20:46.486723  218693 logs.go:123] Gathering logs for dmesg ...
	I1122 00:20:46.486756  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:20:46.500905  218693 logs.go:123] Gathering logs for kube-apiserver [81142e7ed280b3e7a5cc59a091de86e66620f3841486d40ef51f3c845bf698f6] ...
	I1122 00:20:46.500936  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 81142e7ed280b3e7a5cc59a091de86e66620f3841486d40ef51f3c845bf698f6"
	I1122 00:20:46.540691  218693 logs.go:123] Gathering logs for kube-apiserver [2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587] ...
	I1122 00:20:46.540721  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:46.575433  218693 logs.go:123] Gathering logs for kube-controller-manager [718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f] ...
	I1122 00:20:46.575465  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f"
	I1122 00:20:46.606747  218693 logs.go:123] Gathering logs for container status ...
	I1122 00:20:46.606776  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:20:46.641596  218693 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:20:46.641630  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:20:46.714363  218693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:20:46.714390  218693 logs.go:123] Gathering logs for kube-apiserver [031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d] ...
	I1122 00:20:46.714405  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 031d1abe53b88560dbac18645a9d04e01621af0ebc4bdacf93ba7cd987bdbc7d"
	I1122 00:20:46.751379  218693 logs.go:123] Gathering logs for etcd [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7] ...
	I1122 00:20:46.751411  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
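Editor's note: the "Gathering logs" sequence above can be replayed by hand on the node when the apiserver is refusing connections, using the same tools and limits as the test harness (container IDs are whatever `crictl ps -a` returns at that moment):

    sudo crictl ps -a --quiet --name=kube-apiserver    # all apiserver containers, including exited ones
    sudo crictl logs --tail 400 <container-id>         # last 400 lines of one container
    sudo journalctl -u kubelet -n 400 --no-pager       # kubelet side of the story
    sudo journalctl -u containerd -n 400 --no-pager    # containerd side of the story
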
	I1122 00:20:46.780860  271651 cli_runner.go:164] Run: docker network inspect no-preload-781232 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:20:46.800096  271651 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1122 00:20:46.804435  271651 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:20:46.815135  271651 kubeadm.go:884] updating cluster {Name:no-preload-781232 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-781232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:20:46.815300  271651 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:20:46.815354  271651 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:20:46.841055  271651 containerd.go:627] all images are preloaded for containerd runtime.
	I1122 00:20:46.841078  271651 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:20:46.841085  271651 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 containerd true true} ...
	I1122 00:20:46.841185  271651 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-781232 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-781232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:20:46.841246  271651 ssh_runner.go:195] Run: sudo crictl info
	I1122 00:20:46.869512  271651 cni.go:84] Creating CNI manager for ""
	I1122 00:20:46.869537  271651 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:20:46.869558  271651 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:20:46.869579  271651 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-781232 NodeName:no-preload-781232 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:20:46.869707  271651 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-781232"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:20:46.869766  271651 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:20:46.879172  271651 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:20:46.879246  271651 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:20:46.888577  271651 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1122 00:20:46.901776  271651 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:20:46.916546  271651 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
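Editor's note: once the rendered kubeadm config above has been written to /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked on the node without touching the cluster; this assumes the `kubeadm config validate` subcommand is available in the v1.34.1 binary shipped under /var/lib/minikube/binaries:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
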
	I1122 00:20:46.929837  271651 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:20:46.933840  271651 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:20:46.944382  271651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:20:47.027162  271651 ssh_runner.go:195] Run: sudo systemctl start kubelet
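Editor's note: after the kubelet.service unit and the 10-kubeadm.conf drop-in are copied and kubelet is started, the effective unit can be inspected to confirm the drop-in (with the --node-ip and --hostname-override flags above) is in place:

    systemctl cat kubelet        # shows /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in
    systemctl is-active kubelet  # should report "active" once the start above succeeds
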
	I1122 00:20:47.053782  271651 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232 for IP: 192.168.94.2
	I1122 00:20:47.053805  271651 certs.go:195] generating shared ca certs ...
	I1122 00:20:47.053826  271651 certs.go:227] acquiring lock for ca certs: {Name:mkcee17f48cab2703d4de8a78a6fb8af44d9e7e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:20:47.054017  271651 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-9059/.minikube/ca.key
	I1122 00:20:47.054073  271651 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-9059/.minikube/proxy-client-ca.key
	I1122 00:20:47.054095  271651 certs.go:257] generating profile certs ...
	I1122 00:20:47.054221  271651 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232/client.key
	I1122 00:20:47.054337  271651 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232/apiserver.key.80216c10
	I1122 00:20:47.054412  271651 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232/proxy-client.key
	I1122 00:20:47.054552  271651 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/14530.pem (1338 bytes)
	W1122 00:20:47.054609  271651 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-9059/.minikube/certs/14530_empty.pem, impossibly tiny 0 bytes
	I1122 00:20:47.054623  271651 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca-key.pem (1675 bytes)
	I1122 00:20:47.054660  271651 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem (1082 bytes)
	I1122 00:20:47.054695  271651 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:20:47.054737  271651 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/key.pem (1679 bytes)
	I1122 00:20:47.054803  271651 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/ssl/certs/145302.pem (1708 bytes)
	I1122 00:20:47.056310  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:20:47.077024  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:20:47.097417  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:20:47.118489  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1122 00:20:47.143382  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1122 00:20:47.167131  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1122 00:20:47.187237  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:20:47.206197  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:20:47.224793  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/certs/14530.pem --> /usr/share/ca-certificates/14530.pem (1338 bytes)
	I1122 00:20:47.243726  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/ssl/certs/145302.pem --> /usr/share/ca-certificates/145302.pem (1708 bytes)
	I1122 00:20:47.263970  271651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:20:47.284711  271651 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:20:47.297960  271651 ssh_runner.go:195] Run: openssl version
	I1122 00:20:47.305462  271651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145302.pem && ln -fs /usr/share/ca-certificates/145302.pem /etc/ssl/certs/145302.pem"
	I1122 00:20:47.315837  271651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145302.pem
	I1122 00:20:47.321286  271651 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:52 /usr/share/ca-certificates/145302.pem
	I1122 00:20:47.321360  271651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145302.pem
	I1122 00:20:47.359997  271651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145302.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:20:47.369513  271651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:20:47.378451  271651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:20:47.382473  271651 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:47 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:20:47.382531  271651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:20:47.418380  271651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:20:47.427427  271651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14530.pem && ln -fs /usr/share/ca-certificates/14530.pem /etc/ssl/certs/14530.pem"
	I1122 00:20:47.436726  271651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14530.pem
	I1122 00:20:47.440941  271651 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:52 /usr/share/ca-certificates/14530.pem
	I1122 00:20:47.441009  271651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14530.pem
	I1122 00:20:47.476237  271651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14530.pem /etc/ssl/certs/51391683.0"
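Editor's note: the /etc/ssl/certs/<hash>.0 symlinks created above follow the standard OpenSSL subject-hash naming; the hash in, for example, b5213941.0 is derived like this (same certificate paths as in the log):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h prints b5213941 here, matching the link above
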
	I1122 00:20:47.485111  271651 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:20:47.489344  271651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:20:47.525023  271651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:20:47.560451  271651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:20:47.600419  271651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:20:47.659553  271651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:20:47.709346  271651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
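Editor's note: the -checkend 86400 calls above simply ask whether each certificate is still valid 24 hours from now; only the exit status matters, e.g.:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "cert valid for at least another 24h"
    else
      echo "cert expires within 24h"
    fi
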
	I1122 00:20:47.754761  271651 kubeadm.go:401] StartCluster: {Name:no-preload-781232 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-781232 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:20:47.754852  271651 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1122 00:20:47.754919  271651 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:20:47.799187  271651 cri.go:89] found id: "e5449caa295e69e90453980f9e9bb8cca5858a385c302dd4f9a74d2514f50118"
	I1122 00:20:47.799210  271651 cri.go:89] found id: "dd1c7227c8c4e2bf1ae891b410d84e72536ab5c3bf5218fa25044e4b2849b261"
	I1122 00:20:47.799223  271651 cri.go:89] found id: "35b440e09c5a86db6c6dce68737e0b37e0d9302f51ce5fd729ac86a23bae6714"
	I1122 00:20:47.799228  271651 cri.go:89] found id: "a088ba754b8b52d8b3ef3947967041e2570a6e8d27ff5de86ee8d26a638e3aa9"
	I1122 00:20:47.799232  271651 cri.go:89] found id: "b61337c7649d1c8ad6db13120b3d0c9730687561de6dd7c132264eba4d1070be"
	I1122 00:20:47.799237  271651 cri.go:89] found id: "a8df28ee53bb60379874726c9a896717f75e12fd13a7316e60ad11da58feca4a"
	I1122 00:20:47.799241  271651 cri.go:89] found id: "304e6535bf7bedf2a516b8d232b19d3e038abaca4c8c450355eade98b387f580"
	I1122 00:20:47.799246  271651 cri.go:89] found id: "2b0f0e4e1df6d003c1fd5d63a2d88caf527a5828be1e719b714f70bf70e013e6"
	I1122 00:20:47.799250  271651 cri.go:89] found id: "13c5477f80d07937f3038c381810143f379c1a5724ad58b9f212e7d95e199ef6"
	I1122 00:20:47.799274  271651 cri.go:89] found id: "6b02e9e9a07928c42cf1e5bb58d45de4ce420454640d91b3f098f98aa2f59ca6"
	I1122 00:20:47.799280  271651 cri.go:89] found id: "7f1227117afb11933863eec6c929a38cd5f7c89c181f267ac92151e7d68ac0bb"
	I1122 00:20:47.799284  271651 cri.go:89] found id: "190bb0852270abcf17fda286c6be5e9fcb36eb2b98dcf07cf71fa2985c5db26b"
	I1122 00:20:47.799289  271651 cri.go:89] found id: ""
	I1122 00:20:47.799343  271651 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1122 00:20:47.828549  271651 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"0cf2ef9c224764f540097675043703c9a44c2537f156ccff6692ff1d85afe437","pid":858,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0cf2ef9c224764f540097675043703c9a44c2537f156ccff6692ff1d85afe437","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0cf2ef9c224764f540097675043703c9a44c2537f156ccff6692ff1d85afe437/rootfs","created":"2025-11-22T00:20:47.664964359Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"0cf2ef9c224764f540097675043703c9a44c2537f156ccff6692ff1d85afe437","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-no-preload-781232_311cfc4ebe5dbfb8c158af5da75e855b","io.kubernetes.cri.sandbox-memor
y":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-no-preload-781232","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"311cfc4ebe5dbfb8c158af5da75e855b"},"owner":"root"},{"ociVersion":"1.2.1","id":"35b440e09c5a86db6c6dce68737e0b37e0d9302f51ce5fd729ac86a23bae6714","pid":968,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35b440e09c5a86db6c6dce68737e0b37e0d9302f51ce5fd729ac86a23bae6714","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35b440e09c5a86db6c6dce68737e0b37e0d9302f51ce5fd729ac86a23bae6714/rootfs","created":"2025-11-22T00:20:47.804770594Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"6d38af15103c7db75748f98b5c0b6d021b358fbd9a7ae309bdac027ed97eccd2","io.kubernetes.cri.sandbox-name":"kube-scheduler-no-preload-781232","io.kubernetes.cri.sandbox-nam
espace":"kube-system","io.kubernetes.cri.sandbox-uid":"b6660e44a79de4c519af19191b40ac51"},"owner":"root"},{"ociVersion":"1.2.1","id":"6d38af15103c7db75748f98b5c0b6d021b358fbd9a7ae309bdac027ed97eccd2","pid":829,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d38af15103c7db75748f98b5c0b6d021b358fbd9a7ae309bdac027ed97eccd2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d38af15103c7db75748f98b5c0b6d021b358fbd9a7ae309bdac027ed97eccd2/rootfs","created":"2025-11-22T00:20:47.655708991Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"6d38af15103c7db75748f98b5c0b6d021b358fbd9a7ae309bdac027ed97eccd2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-781232_b6660e44a79de4c519a
f19191b40ac51","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-no-preload-781232","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b6660e44a79de4c519af19191b40ac51"},"owner":"root"},{"ociVersion":"1.2.1","id":"6ed8ae255eb270ce384b53a2cfa8af556d87314b9ef910c4ddf73b5057ba4cae","pid":865,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6ed8ae255eb270ce384b53a2cfa8af556d87314b9ef910c4ddf73b5057ba4cae","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6ed8ae255eb270ce384b53a2cfa8af556d87314b9ef910c4ddf73b5057ba4cae/rootfs","created":"2025-11-22T00:20:47.669557881Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"6ed8ae255eb270ce384b53a2cfa8af556d8
7314b9ef910c4ddf73b5057ba4cae","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-781232_0ea3925d850410c51c93e1eebc56436e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-no-preload-781232","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"0ea3925d850410c51c93e1eebc56436e"},"owner":"root"},{"ociVersion":"1.2.1","id":"a088ba754b8b52d8b3ef3947967041e2570a6e8d27ff5de86ee8d26a638e3aa9","pid":932,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a088ba754b8b52d8b3ef3947967041e2570a6e8d27ff5de86ee8d26a638e3aa9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a088ba754b8b52d8b3ef3947967041e2570a6e8d27ff5de86ee8d26a638e3aa9/rootfs","created":"2025-11-22T00:20:47.790822837Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0"
,"io.kubernetes.cri.sandbox-id":"f19abd08427cefe6869fd03c704a57af65b1ae617ff80c086f59d4b339a24ac5","io.kubernetes.cri.sandbox-name":"etcd-no-preload-781232","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2bde7d118300deb354bbf504cfa1dd64"},"owner":"root"},{"ociVersion":"1.2.1","id":"dd1c7227c8c4e2bf1ae891b410d84e72536ab5c3bf5218fa25044e4b2849b261","pid":0,"status":"stopped","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd1c7227c8c4e2bf1ae891b410d84e72536ab5c3bf5218fa25044e4b2849b261","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd1c7227c8c4e2bf1ae891b410d84e72536ab5c3bf5218fa25044e4b2849b261/rootfs","created":"0001-01-01T00:00:00Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"0cf2ef9c224764f540097675043703c9a44c2537f156ccff6692ff1d85afe437","io.kubernetes.cri.sandbox-n
ame":"kube-apiserver-no-preload-781232","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"311cfc4ebe5dbfb8c158af5da75e855b"},"owner":"root"},{"ociVersion":"1.2.1","id":"e5449caa295e69e90453980f9e9bb8cca5858a385c302dd4f9a74d2514f50118","pid":0,"status":"stopped","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5449caa295e69e90453980f9e9bb8cca5858a385c302dd4f9a74d2514f50118","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5449caa295e69e90453980f9e9bb8cca5858a385c302dd4f9a74d2514f50118/rootfs","created":"0001-01-01T00:00:00Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"6ed8ae255eb270ce384b53a2cfa8af556d87314b9ef910c4ddf73b5057ba4cae","io.kubernetes.cri.sandbox-name":"kube-controller-manager-no-preload-781232","io.kubernetes.cri.sandbox-namespace":"kube-system
","io.kubernetes.cri.sandbox-uid":"0ea3925d850410c51c93e1eebc56436e"},"owner":"root"},{"ociVersion":"1.2.1","id":"f19abd08427cefe6869fd03c704a57af65b1ae617ff80c086f59d4b339a24ac5","pid":849,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f19abd08427cefe6869fd03c704a57af65b1ae617ff80c086f59d4b339a24ac5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f19abd08427cefe6869fd03c704a57af65b1ae617ff80c086f59d4b339a24ac5/rootfs","created":"2025-11-22T00:20:47.658838122Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"f19abd08427cefe6869fd03c704a57af65b1ae617ff80c086f59d4b339a24ac5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-781232_2bde7d118300deb354bbf504cfa1dd64","io.kubernetes.
cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-no-preload-781232","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"2bde7d118300deb354bbf504cfa1dd64"},"owner":"root"}]
	I1122 00:20:47.828773  271651 cri.go:126] list returned 8 containers
	I1122 00:20:47.828789  271651 cri.go:129] container: {ID:0cf2ef9c224764f540097675043703c9a44c2537f156ccff6692ff1d85afe437 Status:running}
	I1122 00:20:47.828832  271651 cri.go:131] skipping 0cf2ef9c224764f540097675043703c9a44c2537f156ccff6692ff1d85afe437 - not in ps
	I1122 00:20:47.828844  271651 cri.go:129] container: {ID:35b440e09c5a86db6c6dce68737e0b37e0d9302f51ce5fd729ac86a23bae6714 Status:created}
	I1122 00:20:47.828855  271651 cri.go:135] skipping {35b440e09c5a86db6c6dce68737e0b37e0d9302f51ce5fd729ac86a23bae6714 created}: state = "created", want "paused"
	I1122 00:20:47.828870  271651 cri.go:129] container: {ID:6d38af15103c7db75748f98b5c0b6d021b358fbd9a7ae309bdac027ed97eccd2 Status:running}
	I1122 00:20:47.828878  271651 cri.go:131] skipping 6d38af15103c7db75748f98b5c0b6d021b358fbd9a7ae309bdac027ed97eccd2 - not in ps
	I1122 00:20:47.828889  271651 cri.go:129] container: {ID:6ed8ae255eb270ce384b53a2cfa8af556d87314b9ef910c4ddf73b5057ba4cae Status:running}
	I1122 00:20:47.828896  271651 cri.go:131] skipping 6ed8ae255eb270ce384b53a2cfa8af556d87314b9ef910c4ddf73b5057ba4cae - not in ps
	I1122 00:20:47.828907  271651 cri.go:129] container: {ID:a088ba754b8b52d8b3ef3947967041e2570a6e8d27ff5de86ee8d26a638e3aa9 Status:created}
	I1122 00:20:47.828916  271651 cri.go:135] skipping {a088ba754b8b52d8b3ef3947967041e2570a6e8d27ff5de86ee8d26a638e3aa9 created}: state = "created", want "paused"
	I1122 00:20:47.828929  271651 cri.go:129] container: {ID:dd1c7227c8c4e2bf1ae891b410d84e72536ab5c3bf5218fa25044e4b2849b261 Status:stopped}
	I1122 00:20:47.828938  271651 cri.go:135] skipping {dd1c7227c8c4e2bf1ae891b410d84e72536ab5c3bf5218fa25044e4b2849b261 stopped}: state = "stopped", want "paused"
	I1122 00:20:47.828954  271651 cri.go:129] container: {ID:e5449caa295e69e90453980f9e9bb8cca5858a385c302dd4f9a74d2514f50118 Status:stopped}
	I1122 00:20:47.828966  271651 cri.go:135] skipping {e5449caa295e69e90453980f9e9bb8cca5858a385c302dd4f9a74d2514f50118 stopped}: state = "stopped", want "paused"
	I1122 00:20:47.828976  271651 cri.go:129] container: {ID:f19abd08427cefe6869fd03c704a57af65b1ae617ff80c086f59d4b339a24ac5 Status:running}
	I1122 00:20:47.828986  271651 cri.go:131] skipping f19abd08427cefe6869fd03c704a57af65b1ae617ff80c086f59d4b339a24ac5 - not in ps
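Editor's note: the restart path above lists the raw runc tasks and then skips everything that is not in the paused state. The same listing can be narrowed by hand (jq assumed available on the host; it is not part of the minikube flow):

    sudo runc --root /run/containerd/runc/k8s.io list -f json \
      | jq -r '.[] | select(.status == "paused") | .id'    # empty in this run, matching the "skipping" lines above
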
	I1122 00:20:47.829046  271651 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:20:47.841076  271651 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1122 00:20:47.841097  271651 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1122 00:20:47.841145  271651 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1122 00:20:47.855332  271651 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:20:47.856667  271651 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-781232" does not appear in /home/jenkins/minikube-integration/21934-9059/kubeconfig
	I1122 00:20:47.857597  271651 kubeconfig.go:62] /home/jenkins/minikube-integration/21934-9059/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-781232" cluster setting kubeconfig missing "no-preload-781232" context setting]
	I1122 00:20:47.858995  271651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/kubeconfig: {Name:mk1de43c606bf9b357397ed899e71eb19bad0265 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:20:47.861431  271651 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1122 00:20:47.873388  271651 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1122 00:20:47.873445  271651 kubeadm.go:602] duration metric: took 32.341557ms to restartPrimaryControlPlane
	I1122 00:20:47.873464  271651 kubeadm.go:403] duration metric: took 118.736228ms to StartCluster
	I1122 00:20:47.873485  271651 settings.go:142] acquiring lock: {Name:mk1d60582df8b538e3c57bd1424924e717e0072a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:20:47.873577  271651 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-9059/kubeconfig
	I1122 00:20:47.876108  271651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/kubeconfig: {Name:mk1de43c606bf9b357397ed899e71eb19bad0265 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:20:47.876485  271651 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1122 00:20:47.876636  271651 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:20:47.877236  271651 addons.go:70] Setting dashboard=true in profile "no-preload-781232"
	I1122 00:20:47.877267  271651 addons.go:239] Setting addon dashboard=true in "no-preload-781232"
	W1122 00:20:47.877275  271651 addons.go:248] addon dashboard should already be in state true
	I1122 00:20:47.877305  271651 host.go:66] Checking if "no-preload-781232" exists ...
	I1122 00:20:47.877817  271651 cli_runner.go:164] Run: docker container inspect no-preload-781232 --format={{.State.Status}}
	I1122 00:20:47.876776  271651 config.go:182] Loaded profile config "no-preload-781232": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:20:47.878124  271651 addons.go:70] Setting default-storageclass=true in profile "no-preload-781232"
	I1122 00:20:47.878143  271651 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-781232"
	I1122 00:20:47.878468  271651 cli_runner.go:164] Run: docker container inspect no-preload-781232 --format={{.State.Status}}
	I1122 00:20:47.878642  271651 addons.go:70] Setting storage-provisioner=true in profile "no-preload-781232"
	I1122 00:20:47.878661  271651 addons.go:239] Setting addon storage-provisioner=true in "no-preload-781232"
	W1122 00:20:47.878670  271651 addons.go:248] addon storage-provisioner should already be in state true
	I1122 00:20:47.878699  271651 host.go:66] Checking if "no-preload-781232" exists ...
	I1122 00:20:47.879157  271651 addons.go:70] Setting metrics-server=true in profile "no-preload-781232"
	I1122 00:20:47.879176  271651 addons.go:239] Setting addon metrics-server=true in "no-preload-781232"
	W1122 00:20:47.879184  271651 addons.go:248] addon metrics-server should already be in state true
	I1122 00:20:47.879209  271651 host.go:66] Checking if "no-preload-781232" exists ...
	I1122 00:20:47.879329  271651 cli_runner.go:164] Run: docker container inspect no-preload-781232 --format={{.State.Status}}
	I1122 00:20:47.879791  271651 cli_runner.go:164] Run: docker container inspect no-preload-781232 --format={{.State.Status}}
	I1122 00:20:47.883756  271651 out.go:179] * Verifying Kubernetes components...
	I1122 00:20:47.885139  271651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:20:47.911440  271651 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:20:47.911867  271651 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1122 00:20:47.913400  271651 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:20:47.913472  271651 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:20:47.913449  271651 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1122 00:20:47.913792  271651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-781232
	I1122 00:20:47.915108  271651 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1122 00:20:47.915136  271651 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1122 00:20:47.915162  271651 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1122 00:20:47.915225  271651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-781232
	I1122 00:20:47.916406  271651 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1122 00:20:47.916427  271651 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1122 00:20:47.916494  271651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-781232
	I1122 00:20:47.918308  271651 addons.go:239] Setting addon default-storageclass=true in "no-preload-781232"
	W1122 00:20:47.918331  271651 addons.go:248] addon default-storageclass should already be in state true
	I1122 00:20:47.918361  271651 host.go:66] Checking if "no-preload-781232" exists ...
	I1122 00:20:47.918979  271651 cli_runner.go:164] Run: docker container inspect no-preload-781232 --format={{.State.Status}}
	I1122 00:20:47.940025  271651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/no-preload-781232/id_rsa Username:docker}
	I1122 00:20:47.948359  271651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/no-preload-781232/id_rsa Username:docker}
	I1122 00:20:47.948788  271651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/no-preload-781232/id_rsa Username:docker}
	I1122 00:20:47.955313  271651 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:20:47.955337  271651 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:20:47.955392  271651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-781232
	I1122 00:20:47.985983  271651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/no-preload-781232/id_rsa Username:docker}
	I1122 00:20:48.060916  271651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:20:48.066022  271651 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:20:48.071728  271651 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1122 00:20:48.071752  271651 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1122 00:20:48.074813  271651 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1122 00:20:48.074835  271651 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1122 00:20:48.092826  271651 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1122 00:20:48.092855  271651 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1122 00:20:48.093244  271651 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1122 00:20:48.093303  271651 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1122 00:20:48.101409  271651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:20:48.111335  271651 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1122 00:20:48.111363  271651 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1122 00:20:48.112088  271651 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1122 00:20:48.112108  271651 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1122 00:20:48.131478  271651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1122 00:20:48.133255  271651 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1122 00:20:48.133299  271651 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1122 00:20:48.153747  271651 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1122 00:20:48.153859  271651 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1122 00:20:48.171501  271651 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1122 00:20:48.171544  271651 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1122 00:20:48.194062  271651 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1122 00:20:48.194089  271651 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1122 00:20:48.211682  271651 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1122 00:20:48.211713  271651 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1122 00:20:48.225739  271651 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1122 00:20:48.225765  271651 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1122 00:20:48.239477  271651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1122 00:20:50.102298  271651 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.041269787s)
	I1122 00:20:50.102380  271651 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.036325377s)
	I1122 00:20:50.102435  271651 node_ready.go:35] waiting up to 6m0s for node "no-preload-781232" to be "Ready" ...
	I1122 00:20:50.102485  271651 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.001041352s)
	I1122 00:20:50.102588  271651 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.971077401s)
	I1122 00:20:50.102614  271651 addons.go:495] Verifying addon metrics-server=true in "no-preload-781232"
	I1122 00:20:50.102742  271651 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.863224744s)
	I1122 00:20:50.104446  271651 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-781232 addons enable metrics-server
	
	I1122 00:20:50.111974  271651 node_ready.go:49] node "no-preload-781232" is "Ready"
	I1122 00:20:50.112013  271651 node_ready.go:38] duration metric: took 9.530547ms for node "no-preload-781232" to be "Ready" ...
	I1122 00:20:50.112029  271651 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:20:50.112071  271651 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:20:50.120338  271651 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I1122 00:20:50.121455  271651 addons.go:530] duration metric: took 2.244826496s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
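	As a quick cross-check at this point, the enabled addons for the profile can also be listed directly (illustrative; profile name taken from this log):
	
	    minikube -p no-preload-781232 addons list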
	I1122 00:20:50.125729  271651 api_server.go:72] duration metric: took 2.248867678s to wait for apiserver process to appear ...
	I1122 00:20:50.125753  271651 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:20:50.125775  271651 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1122 00:20:50.131451  271651 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1122 00:20:50.131481  271651 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1122 00:20:50.626769  271651 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1122 00:20:50.633861  271651 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1122 00:20:50.633896  271651 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
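	The 500 responses above are the apiserver reporting that its rbac/bootstrap-roles post-start hook (and, on the first probe, scheduling/bootstrap-system-priority-classes) has not finished yet; minikube simply keeps polling /healthz until it turns healthy. The same per-check breakdown can be requested manually once the kubeconfig context exists (illustrative; context name as written to the kubeconfig earlier in this log):
	
	    kubectl --context no-preload-781232 get --raw='/healthz?verbose'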
	W1122 00:20:48.132085  269458 pod_ready.go:104] pod "coredns-5dd5756b68-pqbfp" is not "Ready", error: <nil>
	W1122 00:20:50.132639  269458 pod_ready.go:104] pod "coredns-5dd5756b68-pqbfp" is not "Ready", error: <nil>
	I1122 00:20:49.288636  218693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:20:49.289177  218693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:20:49.289244  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:20:49.289331  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:20:49.318321  218693 cri.go:89] found id: "81142e7ed280b3e7a5cc59a091de86e66620f3841486d40ef51f3c845bf698f6"
	I1122 00:20:49.318342  218693 cri.go:89] found id: "2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:49.318346  218693 cri.go:89] found id: ""
	I1122 00:20:49.318354  218693 logs.go:282] 2 containers: [81142e7ed280b3e7a5cc59a091de86e66620f3841486d40ef51f3c845bf698f6 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587]
	I1122 00:20:49.318404  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:49.322732  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:49.328495  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1122 00:20:49.328571  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:20:49.369571  218693 cri.go:89] found id: "ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:49.369602  218693 cri.go:89] found id: ""
	I1122 00:20:49.369614  218693 logs.go:282] 1 containers: [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7]
	I1122 00:20:49.369892  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:49.376434  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1122 00:20:49.376520  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:20:49.413883  218693 cri.go:89] found id: ""
	I1122 00:20:49.413916  218693 logs.go:282] 0 containers: []
	W1122 00:20:49.413930  218693 logs.go:284] No container was found matching "coredns"
	I1122 00:20:49.413938  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:20:49.414015  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:20:49.458541  218693 cri.go:89] found id: "8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:49.458567  218693 cri.go:89] found id: "b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:49.458579  218693 cri.go:89] found id: ""
	I1122 00:20:49.458602  218693 logs.go:282] 2 containers: [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2]
	I1122 00:20:49.458682  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:49.465401  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:49.472015  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:20:49.472158  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:20:49.518511  218693 cri.go:89] found id: ""
	I1122 00:20:49.518560  218693 logs.go:282] 0 containers: []
	W1122 00:20:49.518573  218693 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:20:49.518583  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:20:49.518662  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:20:49.557146  218693 cri.go:89] found id: "718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f"
	I1122 00:20:49.557173  218693 cri.go:89] found id: "13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	I1122 00:20:49.557177  218693 cri.go:89] found id: ""
	I1122 00:20:49.557197  218693 logs.go:282] 2 containers: [718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216]
	I1122 00:20:49.557298  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:49.563058  218693 ssh_runner.go:195] Run: which crictl
	I1122 00:20:49.568033  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1122 00:20:49.568107  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:20:49.601346  218693 cri.go:89] found id: ""
	I1122 00:20:49.601493  218693 logs.go:282] 0 containers: []
	W1122 00:20:49.601509  218693 logs.go:284] No container was found matching "kindnet"
	I1122 00:20:49.601519  218693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:20:49.601687  218693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:20:49.640917  218693 cri.go:89] found id: ""
	I1122 00:20:49.640948  218693 logs.go:282] 0 containers: []
	W1122 00:20:49.640961  218693 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:20:49.640973  218693 logs.go:123] Gathering logs for kubelet ...
	I1122 00:20:49.640988  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:20:49.777443  218693 logs.go:123] Gathering logs for kube-apiserver [81142e7ed280b3e7a5cc59a091de86e66620f3841486d40ef51f3c845bf698f6] ...
	I1122 00:20:49.777485  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 81142e7ed280b3e7a5cc59a091de86e66620f3841486d40ef51f3c845bf698f6"
	I1122 00:20:49.821731  218693 logs.go:123] Gathering logs for kube-apiserver [2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587] ...
	I1122 00:20:49.821777  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 2e3aaa0d96c2cc9d110b994e3df108e0a78b3e80dae0dc52febf87cbdd528587"
	I1122 00:20:49.867159  218693 logs.go:123] Gathering logs for etcd [ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7] ...
	I1122 00:20:49.867208  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ce556a5394180410f0cc434955d664ca0f25f8999150ddb0c902378b8f0ec7b7"
	I1122 00:20:49.911762  218693 logs.go:123] Gathering logs for kube-scheduler [b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2] ...
	I1122 00:20:49.911806  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b072fb61e5e8d9fb0c450f6123a51d4ba86ee5162b4c0378a606893fb26410b2"
	I1122 00:20:49.957831  218693 logs.go:123] Gathering logs for kube-controller-manager [718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f] ...
	I1122 00:20:49.957870  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 718cee7bdfa69692cd108ab474d6c47072a742bba7d951af9b6f411c1897835f"
	I1122 00:20:49.994873  218693 logs.go:123] Gathering logs for containerd ...
	I1122 00:20:49.994908  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1122 00:20:50.052408  218693 logs.go:123] Gathering logs for container status ...
	I1122 00:20:50.052446  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:20:50.089867  218693 logs.go:123] Gathering logs for dmesg ...
	I1122 00:20:50.089903  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:20:50.104729  218693 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:20:50.104756  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:20:50.186784  218693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:20:50.186805  218693 logs.go:123] Gathering logs for kube-scheduler [8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78] ...
	I1122 00:20:50.186820  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 8cb86218489f4e4ea496e1edbb81a5bfa9657517fe1f841ab1f262169880ef78"
	I1122 00:20:50.251790  218693 logs.go:123] Gathering logs for kube-controller-manager [13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216] ...
	I1122 00:20:50.251823  218693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 13facf83677f37e3b97292f0f2dc164096fcd8cff5e71fdcf0ad085e3602f216"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	461c148b52a86       56cc512116c8f       9 seconds ago       Running             busybox                   0                   58abf3cc4f7ef       busybox                                      default
	20a5c049d6f88       52546a367cc9e       15 seconds ago      Running             coredns                   0                   79a1c44c38dd2       coredns-66bc5c9577-k2k88                     kube-system
	fd511a6c62f69       6e38f40d628db       15 seconds ago      Running             storage-provisioner       0                   83f30bb381301       storage-provisioner                          kube-system
	e6438a8988dc0       409467f978b4a       26 seconds ago      Running             kindnet-cni               0                   58b068d5a59b0       kindnet-hv86p                                kube-system
	1b4ec96a638d6       fc25172553d79       26 seconds ago      Running             kube-proxy                0                   a5e9cb99d1f8b       kube-proxy-k9lgv                             kube-system
	743ad186a2850       c80c8dbafe7dd       36 seconds ago      Running             kube-controller-manager   0                   c2e39023d6150       kube-controller-manager-embed-certs-491677   kube-system
	7adf72f95bd8d       7dd6aaa1717ab       36 seconds ago      Running             kube-scheduler            0                   23f5d5b5da6a7       kube-scheduler-embed-certs-491677            kube-system
	3cb363d9d975d       5f1f5298c888d       36 seconds ago      Running             etcd                      0                   cc92e0c1ed96a       etcd-embed-certs-491677                      kube-system
	1fdd4e0b2d3b9       c3994bc696102       36 seconds ago      Running             kube-apiserver            0                   ba6a63941e3c1       kube-apiserver-embed-certs-491677            kube-system
	
	
	==> containerd <==
	Nov 22 00:20:40 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:40.377782372Z" level=info msg="Container 20a5c049d6f880fd6113ea2bf02b866cf8da4de064a04a9dd0ead4aeb01f3296: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:20:40 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:40.380843968Z" level=info msg="CreateContainer within sandbox \"83f30bb381301870530d114dfd5080ee46d1a31476c1dba9cbd5e7d03331de1f\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"fd511a6c62f69e24313a3290e3f0e02acd3ce88feb2c9e7fe7296730e88cb3e4\""
	Nov 22 00:20:40 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:40.381584644Z" level=info msg="StartContainer for \"fd511a6c62f69e24313a3290e3f0e02acd3ce88feb2c9e7fe7296730e88cb3e4\""
	Nov 22 00:20:40 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:40.382598255Z" level=info msg="connecting to shim fd511a6c62f69e24313a3290e3f0e02acd3ce88feb2c9e7fe7296730e88cb3e4" address="unix:///run/containerd/s/1a5a6dfb3ec99bc05b7590ae40227f5c2f88254c47dbf8a11fc8ea58060c0391" protocol=ttrpc version=3
	Nov 22 00:20:40 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:40.385156023Z" level=info msg="CreateContainer within sandbox \"79a1c44c38dd21a966c16984a660b22a05e77cce180b95c34134469f42ee439d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"20a5c049d6f880fd6113ea2bf02b866cf8da4de064a04a9dd0ead4aeb01f3296\""
	Nov 22 00:20:40 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:40.386001792Z" level=info msg="StartContainer for \"20a5c049d6f880fd6113ea2bf02b866cf8da4de064a04a9dd0ead4aeb01f3296\""
	Nov 22 00:20:40 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:40.388074624Z" level=info msg="connecting to shim 20a5c049d6f880fd6113ea2bf02b866cf8da4de064a04a9dd0ead4aeb01f3296" address="unix:///run/containerd/s/da00876605a114c60d67b77308aeb878980f80f09c41784882e4d0d420d77766" protocol=ttrpc version=3
	Nov 22 00:20:40 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:40.445770515Z" level=info msg="StartContainer for \"fd511a6c62f69e24313a3290e3f0e02acd3ce88feb2c9e7fe7296730e88cb3e4\" returns successfully"
	Nov 22 00:20:40 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:40.458981550Z" level=info msg="StartContainer for \"20a5c049d6f880fd6113ea2bf02b866cf8da4de064a04a9dd0ead4aeb01f3296\" returns successfully"
	Nov 22 00:20:43 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:43.614864265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:7f94d7ba-76b7-4739-b7a9-81d27936e10f,Namespace:default,Attempt:0,}"
	Nov 22 00:20:43 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:43.658799501Z" level=info msg="connecting to shim 58abf3cc4f7ef81b41dcf8ee3004ce41e46ee04fa9726fe380e0c1f2c09f24bf" address="unix:///run/containerd/s/6cf216f1c5410c0243f2d47b66cd95293341038a3bb812d89f2d4f81d269e558" namespace=k8s.io protocol=ttrpc version=3
	Nov 22 00:20:43 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:43.729905426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:7f94d7ba-76b7-4739-b7a9-81d27936e10f,Namespace:default,Attempt:0,} returns sandbox id \"58abf3cc4f7ef81b41dcf8ee3004ce41e46ee04fa9726fe380e0c1f2c09f24bf\""
	Nov 22 00:20:43 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:43.731835312Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 22 00:20:45 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:45.796345590Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:20:45 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:45.797112964Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396646"
	Nov 22 00:20:45 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:45.798253668Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:20:45 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:45.800058419Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:20:45 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:45.800671869Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.068789592s"
	Nov 22 00:20:45 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:45.800713412Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 22 00:20:45 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:45.804822499Z" level=info msg="CreateContainer within sandbox \"58abf3cc4f7ef81b41dcf8ee3004ce41e46ee04fa9726fe380e0c1f2c09f24bf\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 22 00:20:45 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:45.812489479Z" level=info msg="Container 461c148b52a8640242a54a0d4a15fcc93593178785f39a223711c44bc9715ad5: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:20:45 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:45.818612726Z" level=info msg="CreateContainer within sandbox \"58abf3cc4f7ef81b41dcf8ee3004ce41e46ee04fa9726fe380e0c1f2c09f24bf\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"461c148b52a8640242a54a0d4a15fcc93593178785f39a223711c44bc9715ad5\""
	Nov 22 00:20:45 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:45.819244864Z" level=info msg="StartContainer for \"461c148b52a8640242a54a0d4a15fcc93593178785f39a223711c44bc9715ad5\""
	Nov 22 00:20:45 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:45.820292452Z" level=info msg="connecting to shim 461c148b52a8640242a54a0d4a15fcc93593178785f39a223711c44bc9715ad5" address="unix:///run/containerd/s/6cf216f1c5410c0243f2d47b66cd95293341038a3bb812d89f2d4f81d269e558" protocol=ttrpc version=3
	Nov 22 00:20:45 embed-certs-491677 containerd[662]: time="2025-11-22T00:20:45.877546924Z" level=info msg="StartContainer for \"461c148b52a8640242a54a0d4a15fcc93593178785f39a223711c44bc9715ad5\" returns successfully"
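	The containerd entries above cover the busybox test pod end to end: sandbox creation, a roughly 2s pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc, and the container start. The pull step can be reproduced on the node with crictl (illustrative; image reference copied from the log):
	
	    sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    sudo crictl images | grep busybox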
	
	
	==> coredns [20a5c049d6f880fd6113ea2bf02b866cf8da4de064a04a9dd0ead4aeb01f3296] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40650 - 41397 "HINFO IN 5607469831847770391.4605121898837800457. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018921329s
	
	
	==> describe nodes <==
	Name:               embed-certs-491677
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-491677
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=embed-certs-491677
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_20_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:20:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-491677
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:20:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:20:53 +0000   Sat, 22 Nov 2025 00:20:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:20:53 +0000   Sat, 22 Nov 2025 00:20:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:20:53 +0000   Sat, 22 Nov 2025 00:20:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:20:53 +0000   Sat, 22 Nov 2025 00:20:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-491677
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 5665009e93b91d39dc05718b691e3875
	  System UUID:                e204dac3-e20c-470b-b0cf-5f5980ede5c3
	  Boot ID:                    725aae03-f893-4e0b-b029-cbd3b00ccfdd
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-k2k88                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-embed-certs-491677                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-hv86p                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-embed-certs-491677             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-embed-certs-491677    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-k9lgv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-embed-certs-491677             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  32s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node embed-certs-491677 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node embed-certs-491677 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node embed-certs-491677 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node embed-certs-491677 event: Registered Node embed-certs-491677 in Controller
	  Normal  NodeReady                16s   kubelet          Node embed-certs-491677 status is now: NodeReady
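	The node description above (including the conditions and events tables) can be regenerated against the same profile with kubectl (context and node name taken from the output above):
	
	    kubectl --context embed-certs-491677 describe node embed-certs-491677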
	
	
	==> dmesg <==
	[Nov21 23:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000865] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.410276] i8042: Warning: Keylock active
	[  +0.014947] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.495836] block sda: the capability attribute has been deprecated.
	[  +0.091740] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024333] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.452540] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [3cb363d9d975d59f674c73a72da2871248f5e1d8e260a96c1b2f8a02162d4326] <==
	{"level":"warn","ts":"2025-11-22T00:20:19.863568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.874813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.889608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.906420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.912789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.920152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.926881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.933717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.939824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.947537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.957376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.964635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.972020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.979916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.987558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:19.995293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:20.002056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:20.009726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:20.016540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:20.025001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:20.033624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:20.040793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:20.061234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:20.068370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:20:20.075776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33994","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:20:55 up  1:03,  0 user,  load average: 3.99, 3.47, 2.26
	Linux embed-certs-491677 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e6438a8988dc0f2e029e4fc1850eb99d1df097af6284354717b03557d9cf0e41] <==
	I1122 00:20:29.593761       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:20:29.594049       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1122 00:20:29.594206       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:20:29.594227       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:20:29.594293       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:20:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:20:29.795135       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:20:29.795177       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:20:29.795188       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:20:29.795400       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:20:30.291123       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:20:30.291153       1 metrics.go:72] Registering metrics
	I1122 00:20:30.291425       1 controller.go:711] "Syncing nftables rules"
	I1122 00:20:39.795673       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:20:39.795779       1 main.go:301] handling current node
	I1122 00:20:49.797449       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:20:49.797491       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1fdd4e0b2d3b945bcac84434220e89165d8896cdf19ffa5097bcc810d6f432fd] <==
	I1122 00:20:20.629687       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:20:20.632016       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1122 00:20:20.632023       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:20:20.637972       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:20:20.638087       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:20:20.674887       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1122 00:20:20.676894       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:20:21.533297       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1122 00:20:21.538200       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1122 00:20:21.538223       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:20:22.064334       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:20:22.106684       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:20:22.236984       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1122 00:20:22.243146       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1122 00:20:22.244297       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:20:22.248726       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:20:22.584108       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:20:23.413393       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:20:23.424460       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1122 00:20:23.434327       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:20:28.288895       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:20:28.296373       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:20:28.387332       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1122 00:20:28.436558       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1122 00:20:52.423994       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:35440: use of closed network connection
	
	
	==> kube-controller-manager [743ad186a28504b85641bc291d2966e934eca74995c9788d8acf80ce552cc12d] <==
	I1122 00:20:27.582933       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1122 00:20:27.582949       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1122 00:20:27.582979       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1122 00:20:27.583013       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1122 00:20:27.583053       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:20:27.583106       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:20:27.583121       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:20:27.583108       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1122 00:20:27.583071       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1122 00:20:27.583320       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1122 00:20:27.583341       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1122 00:20:27.583359       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1122 00:20:27.583372       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1122 00:20:27.583475       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1122 00:20:27.584240       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1122 00:20:27.584317       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1122 00:20:27.585297       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1122 00:20:27.587505       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1122 00:20:27.589771       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1122 00:20:27.589828       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:20:27.593071       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:20:27.595425       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1122 00:20:27.602716       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:20:27.606230       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:20:42.535417       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [1b4ec96a638d6d78b8ac0f347162e86a708a0e53df8de231ca44c7eee2b08994] <==
	I1122 00:20:29.056133       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:20:29.131229       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:20:29.231660       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:20:29.231708       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1122 00:20:29.231831       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:20:29.311742       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:20:29.311826       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:20:29.317674       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:20:29.318116       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:20:29.318146       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:20:29.319485       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:20:29.319553       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:20:29.319530       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:20:29.319918       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:20:29.319527       1 config.go:200] "Starting service config controller"
	I1122 00:20:29.320124       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:20:29.319542       1 config.go:309] "Starting node config controller"
	I1122 00:20:29.320194       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:20:29.320203       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:20:29.420003       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1122 00:20:29.420815       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:20:29.420830       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7adf72f95bd8de9a99b3de1a9c91e0f10ca82b21b87f0ab404554319ad825707] <==
	E1122 00:20:20.606091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:20:20.606115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:20:20.606122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:20:20.606299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:20:20.606323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:20:20.606350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:20:20.606500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:20:20.606579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:20:20.606580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:20:20.606640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:20:20.606651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:20:20.606680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:20:21.500830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:20:21.505253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:20:21.529651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:20:21.531636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:20:21.602153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1122 00:20:21.619369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:20:21.766134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:20:21.796321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:20:21.837941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:20:21.846233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:20:21.864355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:20:21.889683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1122 00:20:22.200985       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:20:24 embed-certs-491677 kubelet[1434]: I1122 00:20:24.329818    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-491677" podStartSLOduration=3.329792314 podStartE2EDuration="3.329792314s" podCreationTimestamp="2025-11-22 00:20:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:20:24.319085166 +0000 UTC m=+1.153308791" watchObservedRunningTime="2025-11-22 00:20:24.329792314 +0000 UTC m=+1.164015921"
	Nov 22 00:20:24 embed-certs-491677 kubelet[1434]: I1122 00:20:24.329981    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-491677" podStartSLOduration=2.329969615 podStartE2EDuration="2.329969615s" podCreationTimestamp="2025-11-22 00:20:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:20:24.329911547 +0000 UTC m=+1.164135171" watchObservedRunningTime="2025-11-22 00:20:24.329969615 +0000 UTC m=+1.164193236"
	Nov 22 00:20:24 embed-certs-491677 kubelet[1434]: I1122 00:20:24.353590    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-491677" podStartSLOduration=1.3535689020000001 podStartE2EDuration="1.353568902s" podCreationTimestamp="2025-11-22 00:20:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:20:24.342483146 +0000 UTC m=+1.176706769" watchObservedRunningTime="2025-11-22 00:20:24.353568902 +0000 UTC m=+1.187792527"
	Nov 22 00:20:24 embed-certs-491677 kubelet[1434]: I1122 00:20:24.365825    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-491677" podStartSLOduration=1.365802228 podStartE2EDuration="1.365802228s" podCreationTimestamp="2025-11-22 00:20:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:20:24.353431294 +0000 UTC m=+1.187654938" watchObservedRunningTime="2025-11-22 00:20:24.365802228 +0000 UTC m=+1.200025852"
	Nov 22 00:20:27 embed-certs-491677 kubelet[1434]: I1122 00:20:27.601100    1434 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 22 00:20:27 embed-certs-491677 kubelet[1434]: I1122 00:20:27.601772    1434 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 22 00:20:28 embed-certs-491677 kubelet[1434]: I1122 00:20:28.470464    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aa71cc32-b446-45a5-b379-0bb74ac111be-kube-proxy\") pod \"kube-proxy-k9lgv\" (UID: \"aa71cc32-b446-45a5-b379-0bb74ac111be\") " pod="kube-system/kube-proxy-k9lgv"
	Nov 22 00:20:28 embed-certs-491677 kubelet[1434]: I1122 00:20:28.470519    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa71cc32-b446-45a5-b379-0bb74ac111be-xtables-lock\") pod \"kube-proxy-k9lgv\" (UID: \"aa71cc32-b446-45a5-b379-0bb74ac111be\") " pod="kube-system/kube-proxy-k9lgv"
	Nov 22 00:20:28 embed-certs-491677 kubelet[1434]: I1122 00:20:28.470553    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfnb6\" (UniqueName: \"kubernetes.io/projected/aa71cc32-b446-45a5-b379-0bb74ac111be-kube-api-access-lfnb6\") pod \"kube-proxy-k9lgv\" (UID: \"aa71cc32-b446-45a5-b379-0bb74ac111be\") " pod="kube-system/kube-proxy-k9lgv"
	Nov 22 00:20:28 embed-certs-491677 kubelet[1434]: I1122 00:20:28.470580    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6231b935-f44b-4e7b-a240-287c22f9547b-xtables-lock\") pod \"kindnet-hv86p\" (UID: \"6231b935-f44b-4e7b-a240-287c22f9547b\") " pod="kube-system/kindnet-hv86p"
	Nov 22 00:20:28 embed-certs-491677 kubelet[1434]: I1122 00:20:28.470606    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl6v4\" (UniqueName: \"kubernetes.io/projected/6231b935-f44b-4e7b-a240-287c22f9547b-kube-api-access-sl6v4\") pod \"kindnet-hv86p\" (UID: \"6231b935-f44b-4e7b-a240-287c22f9547b\") " pod="kube-system/kindnet-hv86p"
	Nov 22 00:20:28 embed-certs-491677 kubelet[1434]: I1122 00:20:28.470679    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa71cc32-b446-45a5-b379-0bb74ac111be-lib-modules\") pod \"kube-proxy-k9lgv\" (UID: \"aa71cc32-b446-45a5-b379-0bb74ac111be\") " pod="kube-system/kube-proxy-k9lgv"
	Nov 22 00:20:28 embed-certs-491677 kubelet[1434]: I1122 00:20:28.470768    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6231b935-f44b-4e7b-a240-287c22f9547b-cni-cfg\") pod \"kindnet-hv86p\" (UID: \"6231b935-f44b-4e7b-a240-287c22f9547b\") " pod="kube-system/kindnet-hv86p"
	Nov 22 00:20:28 embed-certs-491677 kubelet[1434]: I1122 00:20:28.470816    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6231b935-f44b-4e7b-a240-287c22f9547b-lib-modules\") pod \"kindnet-hv86p\" (UID: \"6231b935-f44b-4e7b-a240-287c22f9547b\") " pod="kube-system/kindnet-hv86p"
	Nov 22 00:20:30 embed-certs-491677 kubelet[1434]: I1122 00:20:30.303829    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k9lgv" podStartSLOduration=2.303805379 podStartE2EDuration="2.303805379s" podCreationTimestamp="2025-11-22 00:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:20:29.300211616 +0000 UTC m=+6.134435242" watchObservedRunningTime="2025-11-22 00:20:30.303805379 +0000 UTC m=+7.138029006"
	Nov 22 00:20:30 embed-certs-491677 kubelet[1434]: I1122 00:20:30.303958    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-hv86p" podStartSLOduration=2.30395145 podStartE2EDuration="2.30395145s" podCreationTimestamp="2025-11-22 00:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:20:30.303946832 +0000 UTC m=+7.138170454" watchObservedRunningTime="2025-11-22 00:20:30.30395145 +0000 UTC m=+7.138175087"
	Nov 22 00:20:39 embed-certs-491677 kubelet[1434]: I1122 00:20:39.876975    1434 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 22 00:20:39 embed-certs-491677 kubelet[1434]: I1122 00:20:39.960861    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59l8z\" (UniqueName: \"kubernetes.io/projected/957a225b-f96e-47aa-aea3-a77ff5b7843c-kube-api-access-59l8z\") pod \"storage-provisioner\" (UID: \"957a225b-f96e-47aa-aea3-a77ff5b7843c\") " pod="kube-system/storage-provisioner"
	Nov 22 00:20:39 embed-certs-491677 kubelet[1434]: I1122 00:20:39.960914    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5170ce81-2d67-4775-9d3e-7ba7d5b37f03-config-volume\") pod \"coredns-66bc5c9577-k2k88\" (UID: \"5170ce81-2d67-4775-9d3e-7ba7d5b37f03\") " pod="kube-system/coredns-66bc5c9577-k2k88"
	Nov 22 00:20:39 embed-certs-491677 kubelet[1434]: I1122 00:20:39.960940    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/957a225b-f96e-47aa-aea3-a77ff5b7843c-tmp\") pod \"storage-provisioner\" (UID: \"957a225b-f96e-47aa-aea3-a77ff5b7843c\") " pod="kube-system/storage-provisioner"
	Nov 22 00:20:39 embed-certs-491677 kubelet[1434]: I1122 00:20:39.960954    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb4wk\" (UniqueName: \"kubernetes.io/projected/5170ce81-2d67-4775-9d3e-7ba7d5b37f03-kube-api-access-nb4wk\") pod \"coredns-66bc5c9577-k2k88\" (UID: \"5170ce81-2d67-4775-9d3e-7ba7d5b37f03\") " pod="kube-system/coredns-66bc5c9577-k2k88"
	Nov 22 00:20:41 embed-certs-491677 kubelet[1434]: I1122 00:20:41.397951    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-k2k88" podStartSLOduration=13.397927104 podStartE2EDuration="13.397927104s" podCreationTimestamp="2025-11-22 00:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:20:41.37628932 +0000 UTC m=+18.210512945" watchObservedRunningTime="2025-11-22 00:20:41.397927104 +0000 UTC m=+18.232150728"
	Nov 22 00:20:41 embed-certs-491677 kubelet[1434]: I1122 00:20:41.437030    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.433241818 podStartE2EDuration="12.433241818s" podCreationTimestamp="2025-11-22 00:20:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:20:41.401045761 +0000 UTC m=+18.235269385" watchObservedRunningTime="2025-11-22 00:20:41.433241818 +0000 UTC m=+18.267465442"
	Nov 22 00:20:43 embed-certs-491677 kubelet[1434]: I1122 00:20:43.389881    1434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssb89\" (UniqueName: \"kubernetes.io/projected/7f94d7ba-76b7-4739-b7a9-81d27936e10f-kube-api-access-ssb89\") pod \"busybox\" (UID: \"7f94d7ba-76b7-4739-b7a9-81d27936e10f\") " pod="default/busybox"
	Nov 22 00:20:46 embed-certs-491677 kubelet[1434]: I1122 00:20:46.372229    1434 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.3019412049999999 podStartE2EDuration="3.372205905s" podCreationTimestamp="2025-11-22 00:20:43 +0000 UTC" firstStartedPulling="2025-11-22 00:20:43.731373177 +0000 UTC m=+20.565596780" lastFinishedPulling="2025-11-22 00:20:45.801637864 +0000 UTC m=+22.635861480" observedRunningTime="2025-11-22 00:20:46.371900946 +0000 UTC m=+23.206124571" watchObservedRunningTime="2025-11-22 00:20:46.372205905 +0000 UTC m=+23.206429528"
	
	
	==> storage-provisioner [fd511a6c62f69e24313a3290e3f0e02acd3ce88feb2c9e7fe7296730e88cb3e4] <==
	I1122 00:20:40.465734       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:20:40.503342       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:20:40.503400       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1122 00:20:40.525548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:40.535166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:20:40.536556       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:20:40.536736       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-491677_54c74ed4-a9d3-4c1a-a5bf-4458cd9dc8d2!
	I1122 00:20:40.544345       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0b022146-00ff-4e08-8a06-2b5c1521d8c3", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-491677_54c74ed4-a9d3-4c1a-a5bf-4458cd9dc8d2 became leader
	W1122 00:20:40.552369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:40.560346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:20:40.642505       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-491677_54c74ed4-a9d3-4c1a-a5bf-4458cd9dc8d2!
	W1122 00:20:42.563894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:42.568233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:44.571281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:44.574930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:46.578883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:46.583703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:48.587499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:48.592458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:50.596971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:50.605925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:52.609331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:52.613214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:54.617113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:20:54.622048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-491677 -n embed-certs-491677
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-491677 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (13.66s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.76s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-418191 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [62d761e0-90e8-4ae1-98f2-3a0febcc01d1] Pending
helpers_test.go:352: "busybox" [62d761e0-90e8-4ae1-98f2-3a0febcc01d1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [62d761e0-90e8-4ae1-98f2-3a0febcc01d1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004346008s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-418191 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
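The check that fails here is the single kubectl exec above. A minimal sketch for re-running it by hand, assuming the default-k8s-diff-port-418191 profile from this run is still up (context and profile names are taken from the commands above; the expected value 1048576 comes from the test, and the containerd unit name inside the node is an assumption about the kicbase image):

    # Re-run the failing probe inside the busybox test pod
    kubectl --context default-k8s-diff-port-418191 exec busybox -- /bin/sh -c "ulimit -n"
    # Compare with the limit seen inside the minikube node itself
    minikube -p default-k8s-diff-port-418191 ssh -- 'ulimit -n'
    # And with the NOFILE limit configured on the node's containerd service
    minikube -p default-k8s-diff-port-418191 ssh -- 'systemctl show containerd --property=LimitNOFILE'

The test expects 1048576; this run returned 1024 from the pod.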
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-418191
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-418191:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9a27e7ed58ec3e9ea0a1110957fa30bf12fd03514075252b9adf1d3efd21ee29",
	        "Created": "2025-11-22T00:21:39.243006846Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 286501,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:21:39.282645044Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e5906b22e872a17998ae88aee6d850484e7a99144e0db6afcf2c44a53e6042d4",
	        "ResolvConfPath": "/var/lib/docker/containers/9a27e7ed58ec3e9ea0a1110957fa30bf12fd03514075252b9adf1d3efd21ee29/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9a27e7ed58ec3e9ea0a1110957fa30bf12fd03514075252b9adf1d3efd21ee29/hostname",
	        "HostsPath": "/var/lib/docker/containers/9a27e7ed58ec3e9ea0a1110957fa30bf12fd03514075252b9adf1d3efd21ee29/hosts",
	        "LogPath": "/var/lib/docker/containers/9a27e7ed58ec3e9ea0a1110957fa30bf12fd03514075252b9adf1d3efd21ee29/9a27e7ed58ec3e9ea0a1110957fa30bf12fd03514075252b9adf1d3efd21ee29-json.log",
	        "Name": "/default-k8s-diff-port-418191",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-418191:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-418191",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9a27e7ed58ec3e9ea0a1110957fa30bf12fd03514075252b9adf1d3efd21ee29",
	                "LowerDir": "/var/lib/docker/overlay2/9de1604b03553063dfcf170c343940a020addaeb0de7f808c0b4ef93cd42252a-init/diff:/var/lib/docker/overlay2/4b4af9a4e857911a6b5096aeeaee227ee7577c6eff3b08bbb4e765c49ed2fb70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9de1604b03553063dfcf170c343940a020addaeb0de7f808c0b4ef93cd42252a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9de1604b03553063dfcf170c343940a020addaeb0de7f808c0b4ef93cd42252a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9de1604b03553063dfcf170c343940a020addaeb0de7f808c0b4ef93cd42252a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-418191",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-418191/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-418191",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-418191",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-418191",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "399df505604566f4e6bbc893c23ad4cfed1ab125826174ac85943738d7cb9eb5",
	            "SandboxKey": "/var/run/docker/netns/399df5056045",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-418191": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0d513e2ffd1d12a091cc59d5c7402ad8012293f8237487230adf0f25b7f341f2",
	                    "EndpointID": "91edfe978c2330eb32ac9a65b6eea7e1d51813ffd41fe66fe1fc72b496547f7a",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "3a:25:39:0a:49:d7",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-418191",
	                        "9a27e7ed58ec"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
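Note that the HostConfig in the inspect output above reports "Ulimits": [], i.e. the node container was created without explicit ulimits, so it inherits the Docker daemon's defaults, and pods inside it in turn inherit from the containerd service running in that node. A sketch for checking this on the host, using the container name from the inspect output (these commands are not part of the test; the actual values are environment-dependent):

    # Confirm no explicit ulimits were set on the node container
    docker inspect --format '{{json .HostConfig.Ulimits}}' default-k8s-diff-port-418191
    # Soft and hard NOFILE limits as seen inside the node container
    docker exec default-k8s-diff-port-418191 sh -c 'ulimit -Sn; ulimit -Hn'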
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-418191 -n default-k8s-diff-port-418191
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-418191 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-418191 logs -n 25: (1.357787216s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬───────────────────
──┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼───────────────────
──┤
	│ stop    │ -p newest-cni-401244 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-401244 │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ addons  │ enable dashboard -p newest-cni-401244 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-401244 │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ start   │ -p newest-cni-401244 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-401244 │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ image   │ newest-cni-401244 image list --format=json                                                                                                                                                                                                          │ newest-cni-401244 │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ pause   │ -p newest-cni-401244 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-401244 │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ unpause │ -p newest-cni-401244 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-401244 │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ delete  │ -p newest-cni-401244                                                                                                                                                                                                                                │ newest-cni-401244 │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ ssh     │ -p auto-687868 pgrep -a kubelet                                                                                                                                                                                                                     │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ delete  │ -p newest-cni-401244                                                                                                                                                                                                                                │ newest-cni-401244 │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ start   │ -p calico-687868 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd                                                                                                        │ calico-687868     │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │                     │
	│ ssh     │ -p auto-687868 sudo cat /etc/nsswitch.conf                                                                                                                                                                                                          │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ ssh     │ -p auto-687868 sudo cat /etc/hosts                                                                                                                                                                                                                  │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ ssh     │ -p auto-687868 sudo cat /etc/resolv.conf                                                                                                                                                                                                            │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ ssh     │ -p auto-687868 sudo crictl pods                                                                                                                                                                                                                     │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ ssh     │ -p auto-687868 sudo crictl ps --all                                                                                                                                                                                                                 │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ ssh     │ -p auto-687868 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                          │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ ssh     │ -p auto-687868 sudo ip a s                                                                                                                                                                                                                          │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ ssh     │ -p auto-687868 sudo ip r s                                                                                                                                                                                                                          │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ ssh     │ -p auto-687868 sudo iptables-save                                                                                                                                                                                                                   │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ ssh     │ -p auto-687868 sudo iptables -t nat -L -n -v                                                                                                                                                                                                        │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ ssh     │ -p auto-687868 sudo systemctl status kubelet --all --full --no-pager                                                                                                                                                                                │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ ssh     │ -p auto-687868 sudo systemctl cat kubelet --no-pager                                                                                                                                                                                                │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ ssh     │ -p auto-687868 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                                                                                                 │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:23 UTC │
	│ ssh     │ -p auto-687868 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                                                                                                │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:23 UTC │ 22 Nov 25 00:23 UTC │
	│ ssh     │ -p auto-687868 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                                │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:23 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴───────────────────
──┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:22:41
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:22:41.198696  307961 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:22:41.199013  307961 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:22:41.199026  307961 out.go:374] Setting ErrFile to fd 2...
	I1122 00:22:41.199034  307961 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:22:41.199351  307961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9059/.minikube/bin
	I1122 00:22:41.199870  307961 out.go:368] Setting JSON to false
	I1122 00:22:41.201183  307961 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3900,"bootTime":1763767061,"procs":321,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:22:41.201274  307961 start.go:143] virtualization: kvm guest
	I1122 00:22:41.204209  307961 out.go:179] * [calico-687868] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:22:41.206278  307961 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:22:41.206269  307961 notify.go:221] Checking for updates...
	I1122 00:22:41.207610  307961 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:22:41.209161  307961 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9059/kubeconfig
	I1122 00:22:41.210719  307961 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9059/.minikube
	I1122 00:22:41.212423  307961 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:22:41.214119  307961 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:22:41.216446  307961 config.go:182] Loaded profile config "auto-687868": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:22:41.216627  307961 config.go:182] Loaded profile config "default-k8s-diff-port-418191": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:22:41.216758  307961 config.go:182] Loaded profile config "kindnet-687868": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:22:41.216877  307961 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:22:41.251704  307961 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1122 00:22:41.251819  307961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:22:41.327859  307961 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-22 00:22:41.314830182 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:22:41.328008  307961 docker.go:319] overlay module found
	I1122 00:22:41.330242  307961 out.go:179] * Using the docker driver based on user configuration
	I1122 00:22:41.331527  307961 start.go:309] selected driver: docker
	I1122 00:22:41.331547  307961 start.go:930] validating driver "docker" against <nil>
	I1122 00:22:41.331564  307961 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:22:41.332436  307961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:22:41.420088  307961 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-22 00:22:41.404588646 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:22:41.420914  307961 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1122 00:22:41.421273  307961 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:22:41.423335  307961 out.go:179] * Using Docker driver with root privileges
	I1122 00:22:41.424700  307961 cni.go:84] Creating CNI manager for "calico"
	I1122 00:22:41.424721  307961 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1122 00:22:41.424811  307961 start.go:353] cluster config:
	{Name:calico-687868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-687868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:22:41.429917  307961 out.go:179] * Starting "calico-687868" primary control-plane node in "calico-687868" cluster
	I1122 00:22:41.431320  307961 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1122 00:22:41.433303  307961 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:22:41.437495  307961 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:22:41.437549  307961 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1122 00:22:41.437562  307961 cache.go:65] Caching tarball of preloaded images
	I1122 00:22:41.437611  307961 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:22:41.437708  307961 preload.go:238] Found /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1122 00:22:41.437730  307961 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1122 00:22:41.437873  307961 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/config.json ...
	I1122 00:22:41.437904  307961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/config.json: {Name:mka7db926e97d6e5cdd43c81fe015b5df2c80b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:22:41.467913  307961 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:22:41.467940  307961 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:22:41.467963  307961 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:22:41.467995  307961 start.go:360] acquireMachinesLock for calico-687868: {Name:mke73cc1559133bd70447728d473e38271caed16 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:22:41.468128  307961 start.go:364] duration metric: took 111.364µs to acquireMachinesLock for "calico-687868"
	I1122 00:22:41.468173  307961 start.go:93] Provisioning new machine with config: &{Name:calico-687868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-687868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1122 00:22:41.468289  307961 start.go:125] createHost starting for "" (driver="docker")
	I1122 00:22:40.482873  299730 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1122 00:22:40.487377  299730 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1122 00:22:40.487399  299730 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1122 00:22:40.504169  299730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1122 00:22:40.748547  299730 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1122 00:22:40.748617  299730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:22:40.748804  299730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-687868 minikube.k8s.io/updated_at=2025_11_22T00_22_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=kindnet-687868 minikube.k8s.io/primary=true
	I1122 00:22:40.761806  299730 ops.go:34] apiserver oom_adj: -16
	I1122 00:22:40.830592  299730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:22:41.331384  299730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1122 00:22:38.641647  285133 node_ready.go:57] node "default-k8s-diff-port-418191" has "Ready":"False" status (will retry)
	W1122 00:22:41.139368  285133 node_ready.go:57] node "default-k8s-diff-port-418191" has "Ready":"False" status (will retry)
	W1122 00:22:43.139542  285133 node_ready.go:57] node "default-k8s-diff-port-418191" has "Ready":"False" status (will retry)
	I1122 00:22:41.831644  299730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:22:42.331497  299730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:22:42.831252  299730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:22:43.330665  299730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:22:43.831404  299730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:22:44.331503  299730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:22:44.830946  299730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:22:45.331531  299730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:22:45.405321  299730 kubeadm.go:1114] duration metric: took 4.656751714s to wait for elevateKubeSystemPrivileges
	I1122 00:22:45.405369  299730 kubeadm.go:403] duration metric: took 15.55089436s to StartCluster
	I1122 00:22:45.405393  299730 settings.go:142] acquiring lock: {Name:mk1d60582df8b538e3c57bd1424924e717e0072a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:22:45.405471  299730 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-9059/kubeconfig
	I1122 00:22:45.407722  299730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/kubeconfig: {Name:mk1de43c606bf9b357397ed899e71eb19bad0265 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:22:45.409646  299730 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1122 00:22:45.409655  299730 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1122 00:22:45.409774  299730 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:22:45.409865  299730 config.go:182] Loaded profile config "kindnet-687868": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:22:45.409880  299730 addons.go:70] Setting storage-provisioner=true in profile "kindnet-687868"
	I1122 00:22:45.409903  299730 addons.go:70] Setting default-storageclass=true in profile "kindnet-687868"
	I1122 00:22:45.409923  299730 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-687868"
	I1122 00:22:45.409906  299730 addons.go:239] Setting addon storage-provisioner=true in "kindnet-687868"
	I1122 00:22:45.410022  299730 host.go:66] Checking if "kindnet-687868" exists ...
	I1122 00:22:45.410328  299730 cli_runner.go:164] Run: docker container inspect kindnet-687868 --format={{.State.Status}}
	I1122 00:22:45.410623  299730 cli_runner.go:164] Run: docker container inspect kindnet-687868 --format={{.State.Status}}
	I1122 00:22:45.451555  299730 out.go:179] * Verifying Kubernetes components...
	I1122 00:22:45.453192  299730 addons.go:239] Setting addon default-storageclass=true in "kindnet-687868"
	I1122 00:22:45.453237  299730 host.go:66] Checking if "kindnet-687868" exists ...
	I1122 00:22:45.453573  299730 cli_runner.go:164] Run: docker container inspect kindnet-687868 --format={{.State.Status}}
	I1122 00:22:45.469177  299730 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:22:41.472052  307961 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1122 00:22:41.472364  307961 start.go:159] libmachine.API.Create for "calico-687868" (driver="docker")
	I1122 00:22:41.472403  307961 client.go:173] LocalClient.Create starting
	I1122 00:22:41.472492  307961 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem
	I1122 00:22:41.472530  307961 main.go:143] libmachine: Decoding PEM data...
	I1122 00:22:41.472549  307961 main.go:143] libmachine: Parsing certificate...
	I1122 00:22:41.472600  307961 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9059/.minikube/certs/cert.pem
	I1122 00:22:41.472621  307961 main.go:143] libmachine: Decoding PEM data...
	I1122 00:22:41.472631  307961 main.go:143] libmachine: Parsing certificate...
	I1122 00:22:41.473138  307961 cli_runner.go:164] Run: docker network inspect calico-687868 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 00:22:41.497060  307961 cli_runner.go:211] docker network inspect calico-687868 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 00:22:41.497162  307961 network_create.go:284] running [docker network inspect calico-687868] to gather additional debugging logs...
	I1122 00:22:41.497183  307961 cli_runner.go:164] Run: docker network inspect calico-687868
	W1122 00:22:41.517795  307961 cli_runner.go:211] docker network inspect calico-687868 returned with exit code 1
	I1122 00:22:41.517844  307961 network_create.go:287] error running [docker network inspect calico-687868]: docker network inspect calico-687868: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-687868 not found
	I1122 00:22:41.517876  307961 network_create.go:289] output of [docker network inspect calico-687868]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-687868 not found
	
	** /stderr **
	I1122 00:22:41.517990  307961 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:22:41.540513  307961 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1df6c22ede91 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:c7:f4:a5:24:54} reservation:<nil>}
	I1122 00:22:41.541666  307961 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-7d48551462a8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:3b:0e:74:ee:57} reservation:<nil>}
	I1122 00:22:41.542695  307961 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c50004b7f5b6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:73:1e:0d:b7:11} reservation:<nil>}
	I1122 00:22:41.543501  307961 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f9eec8a10bd3 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:52:ca:94:eb:f4:44} reservation:<nil>}
	I1122 00:22:41.544363  307961 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-1f7376f93c90 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:2e:05:2e:4b:93:54} reservation:<nil>}
	I1122 00:22:41.545430  307961 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed5080}
	I1122 00:22:41.545458  307961 network_create.go:124] attempt to create docker network calico-687868 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1122 00:22:41.545511  307961 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-687868 calico-687868
	I1122 00:22:41.608428  307961 network_create.go:108] docker network calico-687868 192.168.94.0/24 created
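The network_create step above picks the first free private /24 (192.168.94.0/24 here) after skipping the subnets already claimed by other profiles, then creates the bridge. A minimal verification sketch (hypothetical; assumes the calico-687868 network still exists on the host) is:

	# Confirm the subnet and gateway chosen for the calico-687868 bridge
	docker network inspect calico-687868 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'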
	I1122 00:22:41.608468  307961 kic.go:121] calculated static IP "192.168.94.2" for the "calico-687868" container
	I1122 00:22:41.608545  307961 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 00:22:41.631287  307961 cli_runner.go:164] Run: docker volume create calico-687868 --label name.minikube.sigs.k8s.io=calico-687868 --label created_by.minikube.sigs.k8s.io=true
	I1122 00:22:41.655689  307961 oci.go:103] Successfully created a docker volume calico-687868
	I1122 00:22:41.655805  307961 cli_runner.go:164] Run: docker run --rm --name calico-687868-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-687868 --entrypoint /usr/bin/test -v calico-687868:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1122 00:22:42.335620  307961 oci.go:107] Successfully prepared a docker volume calico-687868
	I1122 00:22:42.335703  307961 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:22:42.335721  307961 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:22:42.335806  307961 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-687868:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
	I1122 00:22:45.475485  299730 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:22:45.475511  299730 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:22:45.475576  299730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687868
	I1122 00:22:45.494021  299730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/kindnet-687868/id_rsa Username:docker}
	I1122 00:22:45.496762  299730 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:22:45.598522  299730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:22:45.678485  299730 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:22:45.678512  299730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:22:45.678588  299730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687868
	I1122 00:22:45.697923  299730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/kindnet-687868/id_rsa Username:docker}
	I1122 00:22:45.798647  299730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:22:45.902187  299730 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1122 00:22:45.902240  299730 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:22:46.892729  299730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.094037317s)
	I1122 00:22:46.893517  299730 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1122 00:22:46.895549  299730 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
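The replace command above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.85.1 here) and turns on query logging. A minimal check of the injected block (hypothetical; assumes the kindnet-687868 context is still available locally) would be:

	# Show the hosts block injected into the CoreDNS Corefile
	kubectl --context kindnet-687868 -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'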
	W1122 00:22:45.640061  285133 node_ready.go:57] node "default-k8s-diff-port-418191" has "Ready":"False" status (will retry)
	W1122 00:22:47.640108  285133 node_ready.go:57] node "default-k8s-diff-port-418191" has "Ready":"False" status (will retry)
	I1122 00:22:48.139130  285133 node_ready.go:49] node "default-k8s-diff-port-418191" is "Ready"
	I1122 00:22:48.139162  285133 node_ready.go:38] duration metric: took 42.503124681s for node "default-k8s-diff-port-418191" to be "Ready" ...
	I1122 00:22:48.139176  285133 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:22:48.139227  285133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:22:48.151643  285133 api_server.go:72] duration metric: took 42.905389347s to wait for apiserver process to appear ...
	I1122 00:22:48.151672  285133 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:22:48.151691  285133 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1122 00:22:48.155857  285133 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1122 00:22:48.156901  285133 api_server.go:141] control plane version: v1.34.1
	I1122 00:22:48.156945  285133 api_server.go:131] duration metric: took 5.266963ms to wait for apiserver health ...
	I1122 00:22:48.156954  285133 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:22:48.159992  285133 system_pods.go:59] 8 kube-system pods found
	I1122 00:22:48.160024  285133 system_pods.go:61] "coredns-66bc5c9577-nft87" [73b10676-5bd9-4c0b-8e69-ecfd1e7373a8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:22:48.160030  285133 system_pods.go:61] "etcd-default-k8s-diff-port-418191" [2a5b7ded-2579-4cf5-80f9-9d0a659edec9] Running
	I1122 00:22:48.160036  285133 system_pods.go:61] "kindnet-p88n8" [054d63f4-c84a-4d5e-9731-d8dd34464e73] Running
	I1122 00:22:48.160040  285133 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-418191" [5b0a0fe7-7d96-4fe7-a4ce-08607e8d04da] Running
	I1122 00:22:48.160044  285133 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-418191" [b42b00df-9c81-49e6-a85c-f2f9b64ebead] Running
	I1122 00:22:48.160048  285133 system_pods.go:61] "kube-proxy-xf4dv" [ba054583-7e23-479e-a042-2c8fdf7c7b0a] Running
	I1122 00:22:48.160051  285133 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-418191" [45d3a2e6-ee3f-43e2-acf8-ce35d15e187d] Running
	I1122 00:22:48.160058  285133 system_pods.go:61] "storage-provisioner" [7c87a520-6723-4298-9c0d-6bde0b15aec8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:22:48.160065  285133 system_pods.go:74] duration metric: took 3.104931ms to wait for pod list to return data ...
	I1122 00:22:48.160074  285133 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:22:48.162421  285133 default_sa.go:45] found service account: "default"
	I1122 00:22:48.162440  285133 default_sa.go:55] duration metric: took 2.360963ms for default service account to be created ...
	I1122 00:22:48.162448  285133 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:22:48.165172  285133 system_pods.go:86] 8 kube-system pods found
	I1122 00:22:48.165211  285133 system_pods.go:89] "coredns-66bc5c9577-nft87" [73b10676-5bd9-4c0b-8e69-ecfd1e7373a8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:22:48.165219  285133 system_pods.go:89] "etcd-default-k8s-diff-port-418191" [2a5b7ded-2579-4cf5-80f9-9d0a659edec9] Running
	I1122 00:22:48.165230  285133 system_pods.go:89] "kindnet-p88n8" [054d63f4-c84a-4d5e-9731-d8dd34464e73] Running
	I1122 00:22:48.165236  285133 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-418191" [5b0a0fe7-7d96-4fe7-a4ce-08607e8d04da] Running
	I1122 00:22:48.165248  285133 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-418191" [b42b00df-9c81-49e6-a85c-f2f9b64ebead] Running
	I1122 00:22:48.165267  285133 system_pods.go:89] "kube-proxy-xf4dv" [ba054583-7e23-479e-a042-2c8fdf7c7b0a] Running
	I1122 00:22:48.165273  285133 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-418191" [45d3a2e6-ee3f-43e2-acf8-ce35d15e187d] Running
	I1122 00:22:48.165284  285133 system_pods.go:89] "storage-provisioner" [7c87a520-6723-4298-9c0d-6bde0b15aec8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:22:48.165333  285133 retry.go:31] will retry after 264.651933ms: missing components: kube-dns
	I1122 00:22:48.434909  285133 system_pods.go:86] 8 kube-system pods found
	I1122 00:22:48.434942  285133 system_pods.go:89] "coredns-66bc5c9577-nft87" [73b10676-5bd9-4c0b-8e69-ecfd1e7373a8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:22:48.434948  285133 system_pods.go:89] "etcd-default-k8s-diff-port-418191" [2a5b7ded-2579-4cf5-80f9-9d0a659edec9] Running
	I1122 00:22:48.434954  285133 system_pods.go:89] "kindnet-p88n8" [054d63f4-c84a-4d5e-9731-d8dd34464e73] Running
	I1122 00:22:48.434957  285133 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-418191" [5b0a0fe7-7d96-4fe7-a4ce-08607e8d04da] Running
	I1122 00:22:48.434961  285133 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-418191" [b42b00df-9c81-49e6-a85c-f2f9b64ebead] Running
	I1122 00:22:48.434964  285133 system_pods.go:89] "kube-proxy-xf4dv" [ba054583-7e23-479e-a042-2c8fdf7c7b0a] Running
	I1122 00:22:48.434968  285133 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-418191" [45d3a2e6-ee3f-43e2-acf8-ce35d15e187d] Running
	I1122 00:22:48.434973  285133 system_pods.go:89] "storage-provisioner" [7c87a520-6723-4298-9c0d-6bde0b15aec8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:22:48.434988  285133 retry.go:31] will retry after 316.912492ms: missing components: kube-dns
	I1122 00:22:48.755843  285133 system_pods.go:86] 8 kube-system pods found
	I1122 00:22:48.755873  285133 system_pods.go:89] "coredns-66bc5c9577-nft87" [73b10676-5bd9-4c0b-8e69-ecfd1e7373a8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:22:48.755880  285133 system_pods.go:89] "etcd-default-k8s-diff-port-418191" [2a5b7ded-2579-4cf5-80f9-9d0a659edec9] Running
	I1122 00:22:48.755916  285133 system_pods.go:89] "kindnet-p88n8" [054d63f4-c84a-4d5e-9731-d8dd34464e73] Running
	I1122 00:22:48.755923  285133 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-418191" [5b0a0fe7-7d96-4fe7-a4ce-08607e8d04da] Running
	I1122 00:22:48.755927  285133 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-418191" [b42b00df-9c81-49e6-a85c-f2f9b64ebead] Running
	I1122 00:22:48.755931  285133 system_pods.go:89] "kube-proxy-xf4dv" [ba054583-7e23-479e-a042-2c8fdf7c7b0a] Running
	I1122 00:22:48.755936  285133 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-418191" [45d3a2e6-ee3f-43e2-acf8-ce35d15e187d] Running
	I1122 00:22:48.755941  285133 system_pods.go:89] "storage-provisioner" [7c87a520-6723-4298-9c0d-6bde0b15aec8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:22:48.755960  285133 retry.go:31] will retry after 482.915275ms: missing components: kube-dns
	I1122 00:22:49.243694  285133 system_pods.go:86] 8 kube-system pods found
	I1122 00:22:49.243723  285133 system_pods.go:89] "coredns-66bc5c9577-nft87" [73b10676-5bd9-4c0b-8e69-ecfd1e7373a8] Running
	I1122 00:22:49.243729  285133 system_pods.go:89] "etcd-default-k8s-diff-port-418191" [2a5b7ded-2579-4cf5-80f9-9d0a659edec9] Running
	I1122 00:22:49.243734  285133 system_pods.go:89] "kindnet-p88n8" [054d63f4-c84a-4d5e-9731-d8dd34464e73] Running
	I1122 00:22:49.243737  285133 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-418191" [5b0a0fe7-7d96-4fe7-a4ce-08607e8d04da] Running
	I1122 00:22:49.243742  285133 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-418191" [b42b00df-9c81-49e6-a85c-f2f9b64ebead] Running
	I1122 00:22:49.243747  285133 system_pods.go:89] "kube-proxy-xf4dv" [ba054583-7e23-479e-a042-2c8fdf7c7b0a] Running
	I1122 00:22:49.243752  285133 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-418191" [45d3a2e6-ee3f-43e2-acf8-ce35d15e187d] Running
	I1122 00:22:49.243757  285133 system_pods.go:89] "storage-provisioner" [7c87a520-6723-4298-9c0d-6bde0b15aec8] Running
	I1122 00:22:49.243768  285133 system_pods.go:126] duration metric: took 1.081311961s to wait for k8s-apps to be running ...
	I1122 00:22:49.243778  285133 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:22:49.243837  285133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:22:49.258559  285133 system_svc.go:56] duration metric: took 14.768187ms WaitForService to wait for kubelet
	I1122 00:22:49.258592  285133 kubeadm.go:587] duration metric: took 44.012343316s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:22:49.258616  285133 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:22:49.261959  285133 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1122 00:22:49.261986  285133 node_conditions.go:123] node cpu capacity is 8
	I1122 00:22:49.262012  285133 node_conditions.go:105] duration metric: took 3.390936ms to run NodePressure ...
	I1122 00:22:49.262026  285133 start.go:242] waiting for startup goroutines ...
	I1122 00:22:49.262039  285133 start.go:247] waiting for cluster config update ...
	I1122 00:22:49.262057  285133 start.go:256] writing updated cluster config ...
	I1122 00:22:49.262356  285133 ssh_runner.go:195] Run: rm -f paused
	I1122 00:22:49.266584  285133 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:22:49.270363  285133 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nft87" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:49.275064  285133 pod_ready.go:94] pod "coredns-66bc5c9577-nft87" is "Ready"
	I1122 00:22:49.275085  285133 pod_ready.go:86] duration metric: took 4.697539ms for pod "coredns-66bc5c9577-nft87" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:49.277096  285133 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-418191" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:49.281295  285133 pod_ready.go:94] pod "etcd-default-k8s-diff-port-418191" is "Ready"
	I1122 00:22:49.281318  285133 pod_ready.go:86] duration metric: took 4.20058ms for pod "etcd-default-k8s-diff-port-418191" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:49.283153  285133 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-418191" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:49.287048  285133 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-418191" is "Ready"
	I1122 00:22:49.287072  285133 pod_ready.go:86] duration metric: took 3.900916ms for pod "kube-apiserver-default-k8s-diff-port-418191" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:49.289212  285133 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-418191" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:49.671236  285133 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-418191" is "Ready"
	I1122 00:22:49.671298  285133 pod_ready.go:86] duration metric: took 382.060516ms for pod "kube-controller-manager-default-k8s-diff-port-418191" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:49.871838  285133 pod_ready.go:83] waiting for pod "kube-proxy-xf4dv" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:50.271197  285133 pod_ready.go:94] pod "kube-proxy-xf4dv" is "Ready"
	I1122 00:22:50.271225  285133 pod_ready.go:86] duration metric: took 399.35647ms for pod "kube-proxy-xf4dv" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:50.471995  285133 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-418191" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:50.871573  285133 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-418191" is "Ready"
	I1122 00:22:50.871605  285133 pod_ready.go:86] duration metric: took 399.582988ms for pod "kube-scheduler-default-k8s-diff-port-418191" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:50.871619  285133 pod_ready.go:40] duration metric: took 1.60500287s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:22:50.916847  285133 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1122 00:22:50.918772  285133 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-418191" cluster and "default" namespace by default
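With the profile started, kubectl's current context points at default-k8s-diff-port-418191. A short follow-up sketch (hypothetical commands, names taken from the log above) to confirm the cluster is serving:

	kubectl --context default-k8s-diff-port-418191 get nodes
	kubectl --context default-k8s-diff-port-418191 -n kube-system get pods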
	I1122 00:22:46.900904  307961 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-687868:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.565014405s)
	I1122 00:22:46.900940  307961 kic.go:203] duration metric: took 4.565214758s to extract preloaded images to volume ...
	W1122 00:22:46.901050  307961 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1122 00:22:46.901116  307961 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1122 00:22:46.901169  307961 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:22:46.972085  307961 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-687868 --name calico-687868 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-687868 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-687868 --network calico-687868 --ip 192.168.94.2 --volume calico-687868:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1122 00:22:47.316449  307961 cli_runner.go:164] Run: docker container inspect calico-687868 --format={{.State.Running}}
	I1122 00:22:47.339111  307961 cli_runner.go:164] Run: docker container inspect calico-687868 --format={{.State.Status}}
	I1122 00:22:47.360854  307961 cli_runner.go:164] Run: docker exec calico-687868 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:22:47.408960  307961 oci.go:144] the created container "calico-687868" has a running status.
	I1122 00:22:47.408988  307961 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-9059/.minikube/machines/calico-687868/id_rsa...
	I1122 00:22:47.502047  307961 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-9059/.minikube/machines/calico-687868/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:22:47.530508  307961 cli_runner.go:164] Run: docker container inspect calico-687868 --format={{.State.Status}}
	I1122 00:22:47.555115  307961 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:22:47.555144  307961 kic_runner.go:114] Args: [docker exec --privileged calico-687868 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:22:47.602377  307961 cli_runner.go:164] Run: docker container inspect calico-687868 --format={{.State.Status}}
	I1122 00:22:47.631356  307961 machine.go:94] provisionDockerMachine start ...
	I1122 00:22:47.631467  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-687868
	I1122 00:22:47.657568  307961 main.go:143] libmachine: Using SSH client type: native
	I1122 00:22:47.657897  307961 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1122 00:22:47.657915  307961 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:22:47.658528  307961 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49886->127.0.0.1:33118: read: connection reset by peer
	I1122 00:22:50.785488  307961 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-687868
	
	I1122 00:22:50.785520  307961 ubuntu.go:182] provisioning hostname "calico-687868"
	I1122 00:22:50.785594  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-687868
	I1122 00:22:50.804997  307961 main.go:143] libmachine: Using SSH client type: native
	I1122 00:22:50.805303  307961 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1122 00:22:50.805325  307961 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-687868 && echo "calico-687868" | sudo tee /etc/hostname
	I1122 00:22:50.945430  307961 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-687868
	
	I1122 00:22:50.945516  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-687868
	I1122 00:22:50.967847  307961 main.go:143] libmachine: Using SSH client type: native
	I1122 00:22:50.968131  307961 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1122 00:22:50.968174  307961 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-687868' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-687868/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-687868' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:22:51.096113  307961 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:22:51.096143  307961 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9059/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9059/.minikube}
	I1122 00:22:51.096177  307961 ubuntu.go:190] setting up certificates
	I1122 00:22:51.096190  307961 provision.go:84] configureAuth start
	I1122 00:22:51.096253  307961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-687868
	I1122 00:22:51.116714  307961 provision.go:143] copyHostCerts
	I1122 00:22:51.116784  307961 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9059/.minikube/ca.pem, removing ...
	I1122 00:22:51.116793  307961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9059/.minikube/ca.pem
	I1122 00:22:51.116879  307961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9059/.minikube/ca.pem (1082 bytes)
	I1122 00:22:51.116992  307961 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9059/.minikube/cert.pem, removing ...
	I1122 00:22:51.117001  307961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9059/.minikube/cert.pem
	I1122 00:22:51.117030  307961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9059/.minikube/cert.pem (1123 bytes)
	I1122 00:22:51.117103  307961 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9059/.minikube/key.pem, removing ...
	I1122 00:22:51.117110  307961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9059/.minikube/key.pem
	I1122 00:22:51.117145  307961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9059/.minikube/key.pem (1679 bytes)
	I1122 00:22:51.117252  307961 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca-key.pem org=jenkins.calico-687868 san=[127.0.0.1 192.168.94.2 calico-687868 localhost minikube]
	I1122 00:22:51.177363  307961 provision.go:177] copyRemoteCerts
	I1122 00:22:51.177423  307961 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:22:51.177456  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-687868
	I1122 00:22:51.198362  307961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/calico-687868/id_rsa Username:docker}
	I1122 00:22:46.896251  299730 node_ready.go:35] waiting up to 15m0s for node "kindnet-687868" to be "Ready" ...
	I1122 00:22:46.897843  299730 addons.go:530] duration metric: took 1.488060967s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1122 00:22:47.399318  299730 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-687868" context rescaled to 1 replicas
	W1122 00:22:48.899298  299730 node_ready.go:57] node "kindnet-687868" has "Ready":"False" status (will retry)
	W1122 00:22:51.399068  299730 node_ready.go:57] node "kindnet-687868" has "Ready":"False" status (will retry)
	I1122 00:22:51.295172  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:22:51.317014  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1122 00:22:51.335921  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1122 00:22:51.354877  307961 provision.go:87] duration metric: took 258.665412ms to configureAuth
	I1122 00:22:51.354913  307961 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:22:51.355076  307961 config.go:182] Loaded profile config "calico-687868": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:22:51.355087  307961 machine.go:97] duration metric: took 3.723704226s to provisionDockerMachine
	I1122 00:22:51.355094  307961 client.go:176] duration metric: took 9.882685909s to LocalClient.Create
	I1122 00:22:51.355113  307961 start.go:167] duration metric: took 9.882752592s to libmachine.API.Create "calico-687868"
	I1122 00:22:51.355122  307961 start.go:293] postStartSetup for "calico-687868" (driver="docker")
	I1122 00:22:51.355131  307961 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:22:51.355184  307961 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:22:51.355220  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-687868
	I1122 00:22:51.375710  307961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/calico-687868/id_rsa Username:docker}
	I1122 00:22:51.473418  307961 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:22:51.477989  307961 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:22:51.478018  307961 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:22:51.478030  307961 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9059/.minikube/addons for local assets ...
	I1122 00:22:51.478090  307961 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9059/.minikube/files for local assets ...
	I1122 00:22:51.478199  307961 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/ssl/certs/145302.pem -> 145302.pem in /etc/ssl/certs
	I1122 00:22:51.478361  307961 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:22:51.486766  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/ssl/certs/145302.pem --> /etc/ssl/certs/145302.pem (1708 bytes)
	I1122 00:22:51.510722  307961 start.go:296] duration metric: took 155.583338ms for postStartSetup
	I1122 00:22:51.511218  307961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-687868
	I1122 00:22:51.534970  307961 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/config.json ...
	I1122 00:22:51.535387  307961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:22:51.535441  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-687868
	I1122 00:22:51.555661  307961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/calico-687868/id_rsa Username:docker}
	I1122 00:22:51.645695  307961 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:22:51.651079  307961 start.go:128] duration metric: took 10.182769215s to createHost
	I1122 00:22:51.651106  307961 start.go:83] releasing machines lock for "calico-687868", held for 10.182952096s
	I1122 00:22:51.651183  307961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-687868
	I1122 00:22:51.670945  307961 ssh_runner.go:195] Run: cat /version.json
	I1122 00:22:51.670997  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-687868
	I1122 00:22:51.671016  307961 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:22:51.671104  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-687868
	I1122 00:22:51.692940  307961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/calico-687868/id_rsa Username:docker}
	I1122 00:22:51.693168  307961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/calico-687868/id_rsa Username:docker}
	I1122 00:22:51.844530  307961 ssh_runner.go:195] Run: systemctl --version
	I1122 00:22:51.851536  307961 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:22:51.857051  307961 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:22:51.857119  307961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:22:51.884307  307961 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1122 00:22:51.884331  307961 start.go:496] detecting cgroup driver to use...
	I1122 00:22:51.884364  307961 detect.go:190] detected "systemd" cgroup driver on host os
	I1122 00:22:51.884417  307961 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1122 00:22:51.901060  307961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1122 00:22:51.916496  307961 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:22:51.916561  307961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:22:51.934812  307961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:22:51.954675  307961 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:22:52.040868  307961 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:22:52.133710  307961 docker.go:234] disabling docker service ...
	I1122 00:22:52.133770  307961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:22:52.153908  307961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:22:52.167469  307961 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:22:52.253225  307961 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:22:52.333598  307961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:22:52.346545  307961 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:22:52.362241  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1122 00:22:52.373432  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1122 00:22:52.383542  307961 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1122 00:22:52.383609  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1122 00:22:52.393239  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:22:52.403856  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1122 00:22:52.415204  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:22:52.426975  307961 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:22:52.437746  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1122 00:22:52.447374  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1122 00:22:52.456669  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1122 00:22:52.466513  307961 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:22:52.474424  307961 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:22:52.482547  307961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:22:52.565198  307961 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1122 00:22:52.667997  307961 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1122 00:22:52.668074  307961 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1122 00:22:52.672213  307961 start.go:564] Will wait 60s for crictl version
	I1122 00:22:52.672295  307961 ssh_runner.go:195] Run: which crictl
	I1122 00:22:52.676244  307961 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:22:52.705563  307961 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1122 00:22:52.705639  307961 ssh_runner.go:195] Run: containerd --version
	I1122 00:22:52.727577  307961 ssh_runner.go:195] Run: containerd --version
	I1122 00:22:52.752328  307961 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1122 00:22:52.753949  307961 cli_runner.go:164] Run: docker network inspect calico-687868 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:22:52.774961  307961 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1122 00:22:52.779758  307961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:22:52.791083  307961 kubeadm.go:884] updating cluster {Name:calico-687868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-687868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:22:52.791246  307961 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:22:52.791324  307961 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:22:52.817072  307961 containerd.go:627] all images are preloaded for containerd runtime.
	I1122 00:22:52.817096  307961 containerd.go:534] Images already preloaded, skipping extraction
	I1122 00:22:52.817153  307961 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:22:52.842371  307961 containerd.go:627] all images are preloaded for containerd runtime.
	I1122 00:22:52.842395  307961 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:22:52.842402  307961 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 containerd true true} ...
	I1122 00:22:52.842488  307961 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-687868 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-687868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1122 00:22:52.842546  307961 ssh_runner.go:195] Run: sudo crictl info
	I1122 00:22:52.870293  307961 cni.go:84] Creating CNI manager for "calico"
	I1122 00:22:52.870324  307961 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:22:52.870347  307961 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-687868 NodeName:calico-687868 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:22:52.870470  307961 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "calico-687868"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:22:52.870535  307961 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:22:52.879365  307961 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:22:52.879445  307961 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:22:52.887603  307961 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1122 00:22:52.902654  307961 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:22:52.919678  307961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1122 00:22:52.933422  307961 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:22:52.937512  307961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:22:52.948245  307961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:22:53.032432  307961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:22:53.056807  307961 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868 for IP: 192.168.94.2
	I1122 00:22:53.056828  307961 certs.go:195] generating shared ca certs ...
	I1122 00:22:53.056843  307961 certs.go:227] acquiring lock for ca certs: {Name:mkcee17f48cab2703d4de8a78a6fb8af44d9e7e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:22:53.057053  307961 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-9059/.minikube/ca.key
	I1122 00:22:53.057111  307961 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-9059/.minikube/proxy-client-ca.key
	I1122 00:22:53.057133  307961 certs.go:257] generating profile certs ...
	I1122 00:22:53.057219  307961 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/client.key
	I1122 00:22:53.057243  307961 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/client.crt with IP's: []
	I1122 00:22:53.088755  307961 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/client.crt ...
	I1122 00:22:53.088785  307961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/client.crt: {Name:mk84add15339a60b5ccef24fe9963e725101e6f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:22:53.088964  307961 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/client.key ...
	I1122 00:22:53.088978  307961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/client.key: {Name:mka7fa3bf79a958c62b6bebc82776d155235b2be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:22:53.089091  307961 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/apiserver.key.0cb251a4
	I1122 00:22:53.089108  307961 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/apiserver.crt.0cb251a4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1122 00:22:53.157807  307961 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/apiserver.crt.0cb251a4 ...
	I1122 00:22:53.157834  307961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/apiserver.crt.0cb251a4: {Name:mkf02a48ab788e195a706993334f63af02ad209f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:22:53.157993  307961 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/apiserver.key.0cb251a4 ...
	I1122 00:22:53.158006  307961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/apiserver.key.0cb251a4: {Name:mk5b5739150ad75a952c110a888cc20869247d98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:22:53.158077  307961 certs.go:382] copying /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/apiserver.crt.0cb251a4 -> /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/apiserver.crt
	I1122 00:22:53.158160  307961 certs.go:386] copying /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/apiserver.key.0cb251a4 -> /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/apiserver.key
	I1122 00:22:53.158222  307961 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/proxy-client.key
	I1122 00:22:53.158237  307961 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/proxy-client.crt with IP's: []
	I1122 00:22:53.315075  307961 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/proxy-client.crt ...
	I1122 00:22:53.315105  307961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/proxy-client.crt: {Name:mk5bd4e0fee11dec51d745641c9157754959840a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:22:53.315303  307961 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/proxy-client.key ...
	I1122 00:22:53.315318  307961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/proxy-client.key: {Name:mke0af0fbf2534370a5a96beb58d05ea10807b56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:22:53.315504  307961 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/14530.pem (1338 bytes)
	W1122 00:22:53.315544  307961 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-9059/.minikube/certs/14530_empty.pem, impossibly tiny 0 bytes
	I1122 00:22:53.315554  307961 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca-key.pem (1675 bytes)
	I1122 00:22:53.315579  307961 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem (1082 bytes)
	I1122 00:22:53.315609  307961 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:22:53.315632  307961 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/key.pem (1679 bytes)
	I1122 00:22:53.315674  307961 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/ssl/certs/145302.pem (1708 bytes)
	I1122 00:22:53.316303  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:22:53.335675  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:22:53.353713  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:22:53.371975  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1122 00:22:53.390816  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1122 00:22:53.409886  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:22:53.429071  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:22:53.448549  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:22:53.467981  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/certs/14530.pem --> /usr/share/ca-certificates/14530.pem (1338 bytes)
	I1122 00:22:53.490504  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/ssl/certs/145302.pem --> /usr/share/ca-certificates/145302.pem (1708 bytes)
	I1122 00:22:53.512350  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:22:53.534724  307961 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:22:53.551641  307961 ssh_runner.go:195] Run: openssl version
	I1122 00:22:53.558904  307961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:22:53.570019  307961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:22:53.575175  307961 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:47 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:22:53.575238  307961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:22:53.614924  307961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:22:53.624473  307961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14530.pem && ln -fs /usr/share/ca-certificates/14530.pem /etc/ssl/certs/14530.pem"
	I1122 00:22:53.635695  307961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14530.pem
	I1122 00:22:53.640743  307961 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:52 /usr/share/ca-certificates/14530.pem
	I1122 00:22:53.640812  307961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14530.pem
	I1122 00:22:53.676075  307961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14530.pem /etc/ssl/certs/51391683.0"
	I1122 00:22:53.685329  307961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145302.pem && ln -fs /usr/share/ca-certificates/145302.pem /etc/ssl/certs/145302.pem"
	I1122 00:22:53.695018  307961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145302.pem
	I1122 00:22:53.699097  307961 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:52 /usr/share/ca-certificates/145302.pem
	I1122 00:22:53.699164  307961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145302.pem
	I1122 00:22:53.736102  307961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145302.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:22:53.745095  307961 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:22:53.748884  307961 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1122 00:22:53.748988  307961 kubeadm.go:401] StartCluster: {Name:calico-687868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-687868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:22:53.749092  307961 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1122 00:22:53.749146  307961 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:22:53.777622  307961 cri.go:89] found id: ""
	I1122 00:22:53.777698  307961 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:22:53.786436  307961 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:22:53.795001  307961 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:22:53.795065  307961 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:22:53.803020  307961 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:22:53.803038  307961 kubeadm.go:158] found existing configuration files:
	
	I1122 00:22:53.803099  307961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1122 00:22:53.812014  307961 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:22:53.812073  307961 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:22:53.820127  307961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1122 00:22:53.828878  307961 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:22:53.828945  307961 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:22:53.838059  307961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1122 00:22:53.847366  307961 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:22:53.847436  307961 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:22:53.856322  307961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1122 00:22:53.865065  307961 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:22:53.865134  307961 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1122 00:22:53.872822  307961 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:22:53.935133  307961 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1122 00:22:53.997562  307961 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1122 00:22:53.399522  299730 node_ready.go:57] node "kindnet-687868" has "Ready":"False" status (will retry)
	W1122 00:22:55.400183  299730 node_ready.go:57] node "kindnet-687868" has "Ready":"False" status (will retry)
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	8cb631d796c4a       56cc512116c8f       7 seconds ago        Running             busybox                   0                   bd7fc0ae3a9e4       busybox                                                default
	8af2857f48115       52546a367cc9e       13 seconds ago       Running             coredns                   0                   acd81cf74652c       coredns-66bc5c9577-nft87                               kube-system
	1c7a5352ca64e       6e38f40d628db       13 seconds ago       Running             storage-provisioner       0                   7a69de2d42319       storage-provisioner                                    kube-system
	a69c9c4b16631       fc25172553d79       54 seconds ago       Running             kube-proxy                0                   9cb31bdfbde51       kube-proxy-xf4dv                                       kube-system
	f5fede7e17e78       409467f978b4a       54 seconds ago       Running             kindnet-cni               0                   829a2beb85956       kindnet-p88n8                                          kube-system
	ae1b0ba64f1c9       c80c8dbafe7dd       About a minute ago   Running             kube-controller-manager   0                   f5c3464f944b6       kube-controller-manager-default-k8s-diff-port-418191   kube-system
	c3a7e2e1d4e18       7dd6aaa1717ab       About a minute ago   Running             kube-scheduler            0                   db6014bcc06c7       kube-scheduler-default-k8s-diff-port-418191            kube-system
	c8c0e9532df5c       5f1f5298c888d       About a minute ago   Running             etcd                      0                   a3332475a9e1b       etcd-default-k8s-diff-port-418191                      kube-system
	e3c5f6695cef6       c3994bc696102       About a minute ago   Running             kube-apiserver            0                   fc6387175ded9       kube-apiserver-default-k8s-diff-port-418191            kube-system
	
	
	==> containerd <==
	Nov 22 00:22:48 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:48.299034299Z" level=info msg="CreateContainer within sandbox \"7a69de2d42319c13c183b73ad653be56f5f6977a2b22587a8f007f92cdfece1d\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"1c7a5352ca64ea1fd8d75825ba8b5b9b860a37d4a5ebd0314369cfebccf8a213\""
	Nov 22 00:22:48 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:48.300078781Z" level=info msg="StartContainer for \"1c7a5352ca64ea1fd8d75825ba8b5b9b860a37d4a5ebd0314369cfebccf8a213\""
	Nov 22 00:22:48 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:48.300757825Z" level=info msg="Container 8af2857f4811594e175fd3972595724d8e79fb6f60717af2ac2a5679742a45c8: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:22:48 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:48.301699225Z" level=info msg="connecting to shim 1c7a5352ca64ea1fd8d75825ba8b5b9b860a37d4a5ebd0314369cfebccf8a213" address="unix:///run/containerd/s/1332691ed35dbde5a1e5ae76501320189df47d59d0c196e04b91786f4272bf3b" protocol=ttrpc version=3
	Nov 22 00:22:48 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:48.307894511Z" level=info msg="CreateContainer within sandbox \"acd81cf74652c5e8ad0fb65874ed2794660f7fd697c3cec3e458fe04f75fa31c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8af2857f4811594e175fd3972595724d8e79fb6f60717af2ac2a5679742a45c8\""
	Nov 22 00:22:48 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:48.308530378Z" level=info msg="StartContainer for \"8af2857f4811594e175fd3972595724d8e79fb6f60717af2ac2a5679742a45c8\""
	Nov 22 00:22:48 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:48.309701335Z" level=info msg="connecting to shim 8af2857f4811594e175fd3972595724d8e79fb6f60717af2ac2a5679742a45c8" address="unix:///run/containerd/s/8d11fb0aaa540c858a179d9068f0f438256decff6b9ed79d10fe3cd496768944" protocol=ttrpc version=3
	Nov 22 00:22:48 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:48.364574911Z" level=info msg="StartContainer for \"1c7a5352ca64ea1fd8d75825ba8b5b9b860a37d4a5ebd0314369cfebccf8a213\" returns successfully"
	Nov 22 00:22:48 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:48.369393687Z" level=info msg="StartContainer for \"8af2857f4811594e175fd3972595724d8e79fb6f60717af2ac2a5679742a45c8\" returns successfully"
	Nov 22 00:22:51 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:51.392957281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:62d761e0-90e8-4ae1-98f2-3a0febcc01d1,Namespace:default,Attempt:0,}"
	Nov 22 00:22:51 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:51.434296758Z" level=info msg="connecting to shim bd7fc0ae3a9e449b83a429aeda2086d0ab08ea3249db59e600d179c219dc212d" address="unix:///run/containerd/s/d3afb5789e08cd77833d2528991c71e11fd7244542185b7ad39ea967e5ea400d" namespace=k8s.io protocol=ttrpc version=3
	Nov 22 00:22:51 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:51.504968867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:62d761e0-90e8-4ae1-98f2-3a0febcc01d1,Namespace:default,Attempt:0,} returns sandbox id \"bd7fc0ae3a9e449b83a429aeda2086d0ab08ea3249db59e600d179c219dc212d\""
	Nov 22 00:22:51 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:51.507193613Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 22 00:22:53 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:53.555399473Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:22:53 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:53.556351066Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396646"
	Nov 22 00:22:53 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:53.557790325Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:22:53 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:53.559802719Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:22:53 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:53.560515407Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.053268118s"
	Nov 22 00:22:53 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:53.560558901Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 22 00:22:53 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:53.565142218Z" level=info msg="CreateContainer within sandbox \"bd7fc0ae3a9e449b83a429aeda2086d0ab08ea3249db59e600d179c219dc212d\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 22 00:22:53 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:53.573524922Z" level=info msg="Container 8cb631d796c4a8d910dd262320e37f2f12249d644a633c4386a571a9ecb52570: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:22:53 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:53.583735418Z" level=info msg="CreateContainer within sandbox \"bd7fc0ae3a9e449b83a429aeda2086d0ab08ea3249db59e600d179c219dc212d\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"8cb631d796c4a8d910dd262320e37f2f12249d644a633c4386a571a9ecb52570\""
	Nov 22 00:22:53 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:53.584467142Z" level=info msg="StartContainer for \"8cb631d796c4a8d910dd262320e37f2f12249d644a633c4386a571a9ecb52570\""
	Nov 22 00:22:53 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:53.585490347Z" level=info msg="connecting to shim 8cb631d796c4a8d910dd262320e37f2f12249d644a633c4386a571a9ecb52570" address="unix:///run/containerd/s/d3afb5789e08cd77833d2528991c71e11fd7244542185b7ad39ea967e5ea400d" protocol=ttrpc version=3
	Nov 22 00:22:53 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:53.641010933Z" level=info msg="StartContainer for \"8cb631d796c4a8d910dd262320e37f2f12249d644a633c4386a571a9ecb52570\" returns successfully"
	
	
	==> coredns [8af2857f4811594e175fd3972595724d8e79fb6f60717af2ac2a5679742a45c8] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49634 - 16478 "HINFO IN 3208574080046988828.3383671134030109306. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.067175049s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-418191
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-418191
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=default-k8s-diff-port-418191
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_22_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:21:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-418191
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:23:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:23:01 +0000   Sat, 22 Nov 2025 00:21:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:23:01 +0000   Sat, 22 Nov 2025 00:21:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:23:01 +0000   Sat, 22 Nov 2025 00:21:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:23:01 +0000   Sat, 22 Nov 2025 00:22:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-418191
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 5665009e93b91d39dc05718b691e3875
	  System UUID:                31190414-abfb-47c3-96b3-6eea69cb23df
	  Boot ID:                    725aae03-f893-4e0b-b029-cbd3b00ccfdd
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-nft87                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     56s
	  kube-system                 etcd-default-k8s-diff-port-418191                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         61s
	  kube-system                 kindnet-p88n8                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-default-k8s-diff-port-418191             250m (3%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-418191    200m (2%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-xf4dv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-default-k8s-diff-port-418191             100m (1%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 53s                kube-proxy       
	  Normal  Starting                 69s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  69s (x8 over 69s)  kubelet          Node default-k8s-diff-port-418191 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    69s (x8 over 69s)  kubelet          Node default-k8s-diff-port-418191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     69s (x7 over 69s)  kubelet          Node default-k8s-diff-port-418191 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  69s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 62s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  62s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  61s                kubelet          Node default-k8s-diff-port-418191 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s                kubelet          Node default-k8s-diff-port-418191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s                kubelet          Node default-k8s-diff-port-418191 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           57s                node-controller  Node default-k8s-diff-port-418191 event: Registered Node default-k8s-diff-port-418191 in Controller
	  Normal  NodeReady                14s                kubelet          Node default-k8s-diff-port-418191 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov21 23:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000865] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.410276] i8042: Warning: Keylock active
	[  +0.014947] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.495836] block sda: the capability attribute has been deprecated.
	[  +0.091740] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024333] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.452540] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [c8c0e9532df5ca107cdb178f44b3fc3459121a60dedba47feec5fad7f04c4c90] <==
	{"level":"warn","ts":"2025-11-22T00:21:56.693635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.701180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.710724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.718404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.726916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.735205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.742418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.749783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.757387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.765959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.773942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.790194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.797568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.813420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.822071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.830168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.901594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:22:03.123274Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.924086ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" limit:1 ","response":"range_response_count:1 size:216"}
	{"level":"info","ts":"2025-11-22T00:22:03.123381Z","caller":"traceutil/trace.go:172","msg":"trace[180139130] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/generic-garbage-collector; range_end:; response_count:1; response_revision:326; }","duration":"108.061525ms","start":"2025-11-22T00:22:03.015297Z","end":"2025-11-22T00:22:03.123358Z","steps":["trace[180139130] 'range keys from in-memory index tree'  (duration: 107.765304ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:22:03.417201Z","caller":"traceutil/trace.go:172","msg":"trace[1627833544] transaction","detail":"{read_only:false; response_revision:327; number_of_response:1; }","duration":"101.297202ms","start":"2025-11-22T00:22:03.315885Z","end":"2025-11-22T00:22:03.417183Z","steps":["trace[1627833544] 'process raft request'  (duration: 101.182062ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:22:03.664903Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"198.792131ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" limit:1 ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2025-11-22T00:22:03.664973Z","caller":"traceutil/trace.go:172","msg":"trace[2125039577] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:327; }","duration":"198.897465ms","start":"2025-11-22T00:22:03.466061Z","end":"2025-11-22T00:22:03.664958Z","steps":["trace[2125039577] 'range keys from in-memory index tree'  (duration: 198.627516ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:22:03.875435Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.54148ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-11-22T00:22:03.875515Z","caller":"traceutil/trace.go:172","msg":"trace[507791274] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:328; }","duration":"109.634279ms","start":"2025-11-22T00:22:03.765866Z","end":"2025-11-22T00:22:03.875500Z","steps":["trace[507791274] 'range keys from in-memory index tree'  (duration: 109.395481ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:22:04.002519Z","caller":"traceutil/trace.go:172","msg":"trace[164048686] transaction","detail":"{read_only:false; response_revision:329; number_of_response:1; }","duration":"123.699527ms","start":"2025-11-22T00:22:03.878798Z","end":"2025-11-22T00:22:04.002498Z","steps":["trace[164048686] 'process raft request'  (duration: 123.575665ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:23:01 up  1:05,  0 user,  load average: 5.03, 3.98, 2.59
	Linux default-k8s-diff-port-418191 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f5fede7e17e78030ced629bfd5f2ba2e1dd6d87907e9ede26582b5d1a6cf0f01] <==
	I1122 00:22:07.438950       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:22:07.439213       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1122 00:22:07.439421       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:22:07.439444       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:22:07.439471       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:22:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:22:07.834011       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:22:07.834092       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:22:07.834112       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:22:07.834328       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1122 00:22:37.737066       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1122 00:22:37.737066       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1122 00:22:37.737095       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1122 00:22:37.737116       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1122 00:22:39.234994       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:22:39.235035       1 metrics.go:72] Registering metrics
	I1122 00:22:39.235100       1 controller.go:711] "Syncing nftables rules"
	I1122 00:22:47.743345       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1122 00:22:47.743425       1 main.go:301] handling current node
	I1122 00:22:57.738515       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1122 00:22:57.738561       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e3c5f6695cef635b978ec2213007737e56d02278c0a48da875ac581bafc81526] <==
	I1122 00:21:57.479119       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1122 00:21:57.479156       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1122 00:21:57.489202       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:21:57.490242       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1122 00:21:57.499549       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:21:57.500744       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:21:57.673524       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:21:58.377322       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1122 00:21:58.381983       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1122 00:21:58.382002       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:21:58.994599       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:21:59.035954       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:21:59.186377       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1122 00:21:59.193008       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1122 00:21:59.194302       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:21:59.198757       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:21:59.419323       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:22:00.091661       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:22:00.109953       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1122 00:22:00.118448       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:22:04.424672       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:22:04.430411       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:22:05.074551       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:22:05.328099       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1122 00:23:00.213695       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:42548: use of closed network connection
	
	
	==> kube-controller-manager [ae1b0ba64f1c92f88c5ebbaa061488c685e44c2a1a90f6645307c306d0c8c6f5] <==
	I1122 00:22:04.418886       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1122 00:22:04.418910       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1122 00:22:04.419438       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1122 00:22:04.419512       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1122 00:22:04.419686       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1122 00:22:04.423021       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1122 00:22:04.426629       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1122 00:22:04.426745       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:22:04.428920       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:22:04.428962       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:22:04.428970       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:22:04.431525       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1122 00:22:04.441394       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1122 00:22:04.441485       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1122 00:22:04.445620       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:22:04.448708       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1122 00:22:04.456991       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1122 00:22:04.465347       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:22:04.467071       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1122 00:22:04.468083       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1122 00:22:04.468386       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1122 00:22:04.469303       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1122 00:22:04.470544       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:22:04.470554       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:22:49.414946       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a69c9c4b166317f37104328a28376df7c7031e617c3f84dd468678fe6713add6] <==
	I1122 00:22:07.523431       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:22:07.581555       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:22:07.681740       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:22:07.681795       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1122 00:22:07.681945       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:22:07.708325       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:22:07.708399       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:22:07.716033       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:22:07.716632       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:22:07.716661       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:22:07.718326       1 config.go:200] "Starting service config controller"
	I1122 00:22:07.718352       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:22:07.718434       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:22:07.718450       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:22:07.718516       1 config.go:309] "Starting node config controller"
	I1122 00:22:07.718523       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:22:07.718530       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:22:07.718640       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:22:07.718651       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:22:07.818748       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1122 00:22:07.818783       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:22:07.818706       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [c3a7e2e1d4e184c40402418868bbc0ed22ca5a6fd75de2dd42a93a8caf2c8ab6] <==
	I1122 00:21:58.103200       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:21:58.105033       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:21:58.105279       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:21:58.105938       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:21:58.106206       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1122 00:21:58.108517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:21:58.108599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:21:58.108911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1122 00:21:58.109946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:21:58.109986       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:21:58.110053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1122 00:21:58.110064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:21:58.110066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:21:58.110066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 00:21:58.110179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:21:58.110240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1122 00:21:58.110285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:21:58.110369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:21:58.110370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:21:58.110377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:21:58.110374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:21:58.110441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:21:58.110459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:21:58.110523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1122 00:21:59.505990       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:22:04 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:04.428234    1449 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 22 00:22:04 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:04.429423    1449 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 22 00:22:05 default-k8s-diff-port-418191 kubelet[1449]: E1122 00:22:05.368012    1449 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:default-k8s-diff-port-418191\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'default-k8s-diff-port-418191' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 22 00:22:05 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:05.375441    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba054583-7e23-479e-a042-2c8fdf7c7b0a-lib-modules\") pod \"kube-proxy-xf4dv\" (UID: \"ba054583-7e23-479e-a042-2c8fdf7c7b0a\") " pod="kube-system/kube-proxy-xf4dv"
	Nov 22 00:22:05 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:05.375514    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7srt4\" (UniqueName: \"kubernetes.io/projected/ba054583-7e23-479e-a042-2c8fdf7c7b0a-kube-api-access-7srt4\") pod \"kube-proxy-xf4dv\" (UID: \"ba054583-7e23-479e-a042-2c8fdf7c7b0a\") " pod="kube-system/kube-proxy-xf4dv"
	Nov 22 00:22:05 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:05.375553    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ba054583-7e23-479e-a042-2c8fdf7c7b0a-kube-proxy\") pod \"kube-proxy-xf4dv\" (UID: \"ba054583-7e23-479e-a042-2c8fdf7c7b0a\") " pod="kube-system/kube-proxy-xf4dv"
	Nov 22 00:22:05 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:05.375577    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba054583-7e23-479e-a042-2c8fdf7c7b0a-xtables-lock\") pod \"kube-proxy-xf4dv\" (UID: \"ba054583-7e23-479e-a042-2c8fdf7c7b0a\") " pod="kube-system/kube-proxy-xf4dv"
	Nov 22 00:22:05 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:05.475994    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/054d63f4-c84a-4d5e-9731-d8dd34464e73-xtables-lock\") pod \"kindnet-p88n8\" (UID: \"054d63f4-c84a-4d5e-9731-d8dd34464e73\") " pod="kube-system/kindnet-p88n8"
	Nov 22 00:22:05 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:05.476052    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsn8w\" (UniqueName: \"kubernetes.io/projected/054d63f4-c84a-4d5e-9731-d8dd34464e73-kube-api-access-xsn8w\") pod \"kindnet-p88n8\" (UID: \"054d63f4-c84a-4d5e-9731-d8dd34464e73\") " pod="kube-system/kindnet-p88n8"
	Nov 22 00:22:05 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:05.476154    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/054d63f4-c84a-4d5e-9731-d8dd34464e73-cni-cfg\") pod \"kindnet-p88n8\" (UID: \"054d63f4-c84a-4d5e-9731-d8dd34464e73\") " pod="kube-system/kindnet-p88n8"
	Nov 22 00:22:05 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:05.476629    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/054d63f4-c84a-4d5e-9731-d8dd34464e73-lib-modules\") pod \"kindnet-p88n8\" (UID: \"054d63f4-c84a-4d5e-9731-d8dd34464e73\") " pod="kube-system/kindnet-p88n8"
	Nov 22 00:22:06 default-k8s-diff-port-418191 kubelet[1449]: E1122 00:22:06.486297    1449 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 22 00:22:06 default-k8s-diff-port-418191 kubelet[1449]: E1122 00:22:06.486350    1449 projected.go:196] Error preparing data for projected volume kube-api-access-7srt4 for pod kube-system/kube-proxy-xf4dv: failed to sync configmap cache: timed out waiting for the condition
	Nov 22 00:22:06 default-k8s-diff-port-418191 kubelet[1449]: E1122 00:22:06.486482    1449 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ba054583-7e23-479e-a042-2c8fdf7c7b0a-kube-api-access-7srt4 podName:ba054583-7e23-479e-a042-2c8fdf7c7b0a nodeName:}" failed. No retries permitted until 2025-11-22 00:22:06.986444996 +0000 UTC m=+7.129472238 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7srt4" (UniqueName: "kubernetes.io/projected/ba054583-7e23-479e-a042-2c8fdf7c7b0a-kube-api-access-7srt4") pod "kube-proxy-xf4dv" (UID: "ba054583-7e23-479e-a042-2c8fdf7c7b0a") : failed to sync configmap cache: timed out waiting for the condition
	Nov 22 00:22:08 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:08.021074    1449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xf4dv" podStartSLOduration=3.021053473 podStartE2EDuration="3.021053473s" podCreationTimestamp="2025-11-22 00:22:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:22:08.02041431 +0000 UTC m=+8.163441554" watchObservedRunningTime="2025-11-22 00:22:08.021053473 +0000 UTC m=+8.164080715"
	Nov 22 00:22:08 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:08.052517    1449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-p88n8" podStartSLOduration=3.05249855 podStartE2EDuration="3.05249855s" podCreationTimestamp="2025-11-22 00:22:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:22:08.051950793 +0000 UTC m=+8.194978035" watchObservedRunningTime="2025-11-22 00:22:08.05249855 +0000 UTC m=+8.195525790"
	Nov 22 00:22:47 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:47.835791    1449 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 22 00:22:47 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:47.882540    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7c87a520-6723-4298-9c0d-6bde0b15aec8-tmp\") pod \"storage-provisioner\" (UID: \"7c87a520-6723-4298-9c0d-6bde0b15aec8\") " pod="kube-system/storage-provisioner"
	Nov 22 00:22:47 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:47.882598    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wqgc\" (UniqueName: \"kubernetes.io/projected/7c87a520-6723-4298-9c0d-6bde0b15aec8-kube-api-access-8wqgc\") pod \"storage-provisioner\" (UID: \"7c87a520-6723-4298-9c0d-6bde0b15aec8\") " pod="kube-system/storage-provisioner"
	Nov 22 00:22:47 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:47.882623    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73b10676-5bd9-4c0b-8e69-ecfd1e7373a8-config-volume\") pod \"coredns-66bc5c9577-nft87\" (UID: \"73b10676-5bd9-4c0b-8e69-ecfd1e7373a8\") " pod="kube-system/coredns-66bc5c9577-nft87"
	Nov 22 00:22:47 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:47.882715    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6vmp\" (UniqueName: \"kubernetes.io/projected/73b10676-5bd9-4c0b-8e69-ecfd1e7373a8-kube-api-access-k6vmp\") pod \"coredns-66bc5c9577-nft87\" (UID: \"73b10676-5bd9-4c0b-8e69-ecfd1e7373a8\") " pod="kube-system/coredns-66bc5c9577-nft87"
	Nov 22 00:22:49 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:49.116103    1449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-nft87" podStartSLOduration=44.116081028 podStartE2EDuration="44.116081028s" podCreationTimestamp="2025-11-22 00:22:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:22:49.116046066 +0000 UTC m=+49.259073319" watchObservedRunningTime="2025-11-22 00:22:49.116081028 +0000 UTC m=+49.259108271"
	Nov 22 00:22:49 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:49.139120    1449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=44.139092491 podStartE2EDuration="44.139092491s" podCreationTimestamp="2025-11-22 00:22:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:22:49.128313366 +0000 UTC m=+49.271340606" watchObservedRunningTime="2025-11-22 00:22:49.139092491 +0000 UTC m=+49.282119736"
	Nov 22 00:22:51 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:51.102194    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crnjt\" (UniqueName: \"kubernetes.io/projected/62d761e0-90e8-4ae1-98f2-3a0febcc01d1-kube-api-access-crnjt\") pod \"busybox\" (UID: \"62d761e0-90e8-4ae1-98f2-3a0febcc01d1\") " pod="default/busybox"
	Nov 22 00:22:54 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:54.129962    1449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.075017629 podStartE2EDuration="3.129910601s" podCreationTimestamp="2025-11-22 00:22:51 +0000 UTC" firstStartedPulling="2025-11-22 00:22:51.506690663 +0000 UTC m=+51.649717900" lastFinishedPulling="2025-11-22 00:22:53.561583638 +0000 UTC m=+53.704610872" observedRunningTime="2025-11-22 00:22:54.129699022 +0000 UTC m=+54.272726265" watchObservedRunningTime="2025-11-22 00:22:54.129910601 +0000 UTC m=+54.272937842"
	
	
	==> storage-provisioner [1c7a5352ca64ea1fd8d75825ba8b5b9b860a37d4a5ebd0314369cfebccf8a213] <==
	I1122 00:22:48.375217       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:22:48.384699       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:22:48.384760       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1122 00:22:48.387058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:22:48.394507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:22:48.394702       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:22:48.394866       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-418191_79caea8a-e8c0-48e5-8546-6614ce9da2e5!
	I1122 00:22:48.394959       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d3496c18-6ea3-455d-bd46-14aab2590703", APIVersion:"v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-418191_79caea8a-e8c0-48e5-8546-6614ce9da2e5 became leader
	W1122 00:22:48.397694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:22:48.402233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:22:48.495931       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-418191_79caea8a-e8c0-48e5-8546-6614ce9da2e5!
	W1122 00:22:50.405395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:22:50.409767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:22:52.413787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:22:52.418958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:22:54.422120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:22:54.426237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:22:56.429543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:22:56.435467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:22:58.444594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:22:58.452105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:23:00.458430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:23:00.466066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-418191 -n default-k8s-diff-port-418191
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-418191 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-418191
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-418191:

-- stdout --
	[
	    {
	        "Id": "9a27e7ed58ec3e9ea0a1110957fa30bf12fd03514075252b9adf1d3efd21ee29",
	        "Created": "2025-11-22T00:21:39.243006846Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 286501,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:21:39.282645044Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e5906b22e872a17998ae88aee6d850484e7a99144e0db6afcf2c44a53e6042d4",
	        "ResolvConfPath": "/var/lib/docker/containers/9a27e7ed58ec3e9ea0a1110957fa30bf12fd03514075252b9adf1d3efd21ee29/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9a27e7ed58ec3e9ea0a1110957fa30bf12fd03514075252b9adf1d3efd21ee29/hostname",
	        "HostsPath": "/var/lib/docker/containers/9a27e7ed58ec3e9ea0a1110957fa30bf12fd03514075252b9adf1d3efd21ee29/hosts",
	        "LogPath": "/var/lib/docker/containers/9a27e7ed58ec3e9ea0a1110957fa30bf12fd03514075252b9adf1d3efd21ee29/9a27e7ed58ec3e9ea0a1110957fa30bf12fd03514075252b9adf1d3efd21ee29-json.log",
	        "Name": "/default-k8s-diff-port-418191",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-418191:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-418191",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9a27e7ed58ec3e9ea0a1110957fa30bf12fd03514075252b9adf1d3efd21ee29",
	                "LowerDir": "/var/lib/docker/overlay2/9de1604b03553063dfcf170c343940a020addaeb0de7f808c0b4ef93cd42252a-init/diff:/var/lib/docker/overlay2/4b4af9a4e857911a6b5096aeeaee227ee7577c6eff3b08bbb4e765c49ed2fb70/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9de1604b03553063dfcf170c343940a020addaeb0de7f808c0b4ef93cd42252a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9de1604b03553063dfcf170c343940a020addaeb0de7f808c0b4ef93cd42252a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9de1604b03553063dfcf170c343940a020addaeb0de7f808c0b4ef93cd42252a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-418191",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-418191/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-418191",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-418191",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-418191",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "399df505604566f4e6bbc893c23ad4cfed1ab125826174ac85943738d7cb9eb5",
	            "SandboxKey": "/var/run/docker/netns/399df5056045",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-418191": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0d513e2ffd1d12a091cc59d5c7402ad8012293f8237487230adf0f25b7f341f2",
	                    "EndpointID": "91edfe978c2330eb32ac9a65b6eea7e1d51813ffd41fe66fe1fc72b496547f7a",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "3a:25:39:0a:49:d7",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-418191",
	                        "9a27e7ed58ec"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-418191 -n default-k8s-diff-port-418191
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-418191 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-418191 logs -n 25: (1.20288807s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                     ARGS                                                                     │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p newest-cni-401244                                                                                                                         │ newest-cni-401244 │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ ssh     │ -p auto-687868 pgrep -a kubelet                                                                                                              │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ delete  │ -p newest-cni-401244                                                                                                                         │ newest-cni-401244 │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ start   │ -p calico-687868 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd │ calico-687868     │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │                     │
	│ ssh     │ -p auto-687868 sudo cat /etc/nsswitch.conf                                                                                                   │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ ssh     │ -p auto-687868 sudo cat /etc/hosts                                                                                                           │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ ssh     │ -p auto-687868 sudo cat /etc/resolv.conf                                                                                                     │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ ssh     │ -p auto-687868 sudo crictl pods                                                                                                              │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ ssh     │ -p auto-687868 sudo crictl ps --all                                                                                                          │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ ssh     │ -p auto-687868 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                   │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ ssh     │ -p auto-687868 sudo ip a s                                                                                                                   │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ ssh     │ -p auto-687868 sudo ip r s                                                                                                                   │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ ssh     │ -p auto-687868 sudo iptables-save                                                                                                            │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ ssh     │ -p auto-687868 sudo iptables -t nat -L -n -v                                                                                                 │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ ssh     │ -p auto-687868 sudo systemctl status kubelet --all --full --no-pager                                                                         │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ ssh     │ -p auto-687868 sudo systemctl cat kubelet --no-pager                                                                                         │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:22 UTC │
	│ ssh     │ -p auto-687868 sudo journalctl -xeu kubelet --all --full --no-pager                                                                          │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:22 UTC │ 22 Nov 25 00:23 UTC │
	│ ssh     │ -p auto-687868 sudo cat /etc/kubernetes/kubelet.conf                                                                                         │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:23 UTC │ 22 Nov 25 00:23 UTC │
	│ ssh     │ -p auto-687868 sudo cat /var/lib/kubelet/config.yaml                                                                                         │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:23 UTC │ 22 Nov 25 00:23 UTC │
	│ ssh     │ -p auto-687868 sudo systemctl status docker --all --full --no-pager                                                                          │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:23 UTC │                     │
	│ ssh     │ -p auto-687868 sudo systemctl cat docker --no-pager                                                                                          │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:23 UTC │ 22 Nov 25 00:23 UTC │
	│ ssh     │ -p auto-687868 sudo cat /etc/docker/daemon.json                                                                                              │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:23 UTC │                     │
	│ ssh     │ -p auto-687868 sudo docker system info                                                                                                       │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:23 UTC │                     │
	│ ssh     │ -p auto-687868 sudo systemctl status cri-docker --all --full --no-pager                                                                      │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:23 UTC │                     │
	│ ssh     │ -p auto-687868 sudo systemctl cat cri-docker --no-pager                                                                                      │ auto-687868       │ jenkins │ v1.37.0 │ 22 Nov 25 00:23 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:22:41
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:22:41.198696  307961 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:22:41.199013  307961 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:22:41.199026  307961 out.go:374] Setting ErrFile to fd 2...
	I1122 00:22:41.199034  307961 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:22:41.199351  307961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9059/.minikube/bin
	I1122 00:22:41.199870  307961 out.go:368] Setting JSON to false
	I1122 00:22:41.201183  307961 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3900,"bootTime":1763767061,"procs":321,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:22:41.201274  307961 start.go:143] virtualization: kvm guest
	I1122 00:22:41.204209  307961 out.go:179] * [calico-687868] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:22:41.206278  307961 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:22:41.206269  307961 notify.go:221] Checking for updates...
	I1122 00:22:41.207610  307961 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:22:41.209161  307961 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9059/kubeconfig
	I1122 00:22:41.210719  307961 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9059/.minikube
	I1122 00:22:41.212423  307961 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:22:41.214119  307961 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:22:41.216446  307961 config.go:182] Loaded profile config "auto-687868": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:22:41.216627  307961 config.go:182] Loaded profile config "default-k8s-diff-port-418191": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:22:41.216758  307961 config.go:182] Loaded profile config "kindnet-687868": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:22:41.216877  307961 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:22:41.251704  307961 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1122 00:22:41.251819  307961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:22:41.327859  307961 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-22 00:22:41.314830182 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:22:41.328008  307961 docker.go:319] overlay module found
	I1122 00:22:41.330242  307961 out.go:179] * Using the docker driver based on user configuration
	I1122 00:22:41.331527  307961 start.go:309] selected driver: docker
	I1122 00:22:41.331547  307961 start.go:930] validating driver "docker" against <nil>
	I1122 00:22:41.331564  307961 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:22:41.332436  307961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:22:41.420088  307961 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-22 00:22:41.404588646 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:22:41.420914  307961 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1122 00:22:41.421273  307961 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:22:41.423335  307961 out.go:179] * Using Docker driver with root privileges
	I1122 00:22:41.424700  307961 cni.go:84] Creating CNI manager for "calico"
	I1122 00:22:41.424721  307961 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1122 00:22:41.424811  307961 start.go:353] cluster config:
	{Name:calico-687868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-687868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:22:41.429917  307961 out.go:179] * Starting "calico-687868" primary control-plane node in "calico-687868" cluster
	I1122 00:22:41.431320  307961 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1122 00:22:41.433303  307961 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:22:41.437495  307961 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:22:41.437549  307961 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1122 00:22:41.437562  307961 cache.go:65] Caching tarball of preloaded images
	I1122 00:22:41.437611  307961 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:22:41.437708  307961 preload.go:238] Found /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1122 00:22:41.437730  307961 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1122 00:22:41.437873  307961 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/config.json ...
	I1122 00:22:41.437904  307961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/config.json: {Name:mka7db926e97d6e5cdd43c81fe015b5df2c80b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:22:41.467913  307961 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:22:41.467940  307961 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:22:41.467963  307961 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:22:41.467995  307961 start.go:360] acquireMachinesLock for calico-687868: {Name:mke73cc1559133bd70447728d473e38271caed16 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:22:41.468128  307961 start.go:364] duration metric: took 111.364µs to acquireMachinesLock for "calico-687868"
	I1122 00:22:41.468173  307961 start.go:93] Provisioning new machine with config: &{Name:calico-687868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-687868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1122 00:22:41.468289  307961 start.go:125] createHost starting for "" (driver="docker")
	I1122 00:22:40.482873  299730 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1122 00:22:40.487377  299730 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1122 00:22:40.487399  299730 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1122 00:22:40.504169  299730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1122 00:22:40.748547  299730 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1122 00:22:40.748617  299730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:22:40.748804  299730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-687868 minikube.k8s.io/updated_at=2025_11_22T00_22_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=kindnet-687868 minikube.k8s.io/primary=true
	I1122 00:22:40.761806  299730 ops.go:34] apiserver oom_adj: -16
	I1122 00:22:40.830592  299730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:22:41.331384  299730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1122 00:22:38.641647  285133 node_ready.go:57] node "default-k8s-diff-port-418191" has "Ready":"False" status (will retry)
	W1122 00:22:41.139368  285133 node_ready.go:57] node "default-k8s-diff-port-418191" has "Ready":"False" status (will retry)
	W1122 00:22:43.139542  285133 node_ready.go:57] node "default-k8s-diff-port-418191" has "Ready":"False" status (will retry)
	I1122 00:22:41.831644  299730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:22:42.331497  299730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:22:42.831252  299730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:22:43.330665  299730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:22:43.831404  299730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:22:44.331503  299730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:22:44.830946  299730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:22:45.331531  299730 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:22:45.405321  299730 kubeadm.go:1114] duration metric: took 4.656751714s to wait for elevateKubeSystemPrivileges
	I1122 00:22:45.405369  299730 kubeadm.go:403] duration metric: took 15.55089436s to StartCluster
	I1122 00:22:45.405393  299730 settings.go:142] acquiring lock: {Name:mk1d60582df8b538e3c57bd1424924e717e0072a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:22:45.405471  299730 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-9059/kubeconfig
	I1122 00:22:45.407722  299730 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/kubeconfig: {Name:mk1de43c606bf9b357397ed899e71eb19bad0265 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:22:45.409646  299730 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1122 00:22:45.409655  299730 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1122 00:22:45.409774  299730 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:22:45.409865  299730 config.go:182] Loaded profile config "kindnet-687868": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:22:45.409880  299730 addons.go:70] Setting storage-provisioner=true in profile "kindnet-687868"
	I1122 00:22:45.409903  299730 addons.go:70] Setting default-storageclass=true in profile "kindnet-687868"
	I1122 00:22:45.409923  299730 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-687868"
	I1122 00:22:45.409906  299730 addons.go:239] Setting addon storage-provisioner=true in "kindnet-687868"
	I1122 00:22:45.410022  299730 host.go:66] Checking if "kindnet-687868" exists ...
	I1122 00:22:45.410328  299730 cli_runner.go:164] Run: docker container inspect kindnet-687868 --format={{.State.Status}}
	I1122 00:22:45.410623  299730 cli_runner.go:164] Run: docker container inspect kindnet-687868 --format={{.State.Status}}
	I1122 00:22:45.451555  299730 out.go:179] * Verifying Kubernetes components...
	I1122 00:22:45.453192  299730 addons.go:239] Setting addon default-storageclass=true in "kindnet-687868"
	I1122 00:22:45.453237  299730 host.go:66] Checking if "kindnet-687868" exists ...
	I1122 00:22:45.453573  299730 cli_runner.go:164] Run: docker container inspect kindnet-687868 --format={{.State.Status}}
	I1122 00:22:45.469177  299730 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:22:41.472052  307961 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1122 00:22:41.472364  307961 start.go:159] libmachine.API.Create for "calico-687868" (driver="docker")
	I1122 00:22:41.472403  307961 client.go:173] LocalClient.Create starting
	I1122 00:22:41.472492  307961 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem
	I1122 00:22:41.472530  307961 main.go:143] libmachine: Decoding PEM data...
	I1122 00:22:41.472549  307961 main.go:143] libmachine: Parsing certificate...
	I1122 00:22:41.472600  307961 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-9059/.minikube/certs/cert.pem
	I1122 00:22:41.472621  307961 main.go:143] libmachine: Decoding PEM data...
	I1122 00:22:41.472631  307961 main.go:143] libmachine: Parsing certificate...
	I1122 00:22:41.473138  307961 cli_runner.go:164] Run: docker network inspect calico-687868 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 00:22:41.497060  307961 cli_runner.go:211] docker network inspect calico-687868 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 00:22:41.497162  307961 network_create.go:284] running [docker network inspect calico-687868] to gather additional debugging logs...
	I1122 00:22:41.497183  307961 cli_runner.go:164] Run: docker network inspect calico-687868
	W1122 00:22:41.517795  307961 cli_runner.go:211] docker network inspect calico-687868 returned with exit code 1
	I1122 00:22:41.517844  307961 network_create.go:287] error running [docker network inspect calico-687868]: docker network inspect calico-687868: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-687868 not found
	I1122 00:22:41.517876  307961 network_create.go:289] output of [docker network inspect calico-687868]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-687868 not found
	
	** /stderr **
	I1122 00:22:41.517990  307961 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:22:41.540513  307961 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1df6c22ede91 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:c7:f4:a5:24:54} reservation:<nil>}
	I1122 00:22:41.541666  307961 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-7d48551462a8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:3b:0e:74:ee:57} reservation:<nil>}
	I1122 00:22:41.542695  307961 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c50004b7f5b6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:56:73:1e:0d:b7:11} reservation:<nil>}
	I1122 00:22:41.543501  307961 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f9eec8a10bd3 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:52:ca:94:eb:f4:44} reservation:<nil>}
	I1122 00:22:41.544363  307961 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-1f7376f93c90 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:2e:05:2e:4b:93:54} reservation:<nil>}
	I1122 00:22:41.545430  307961 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed5080}
	I1122 00:22:41.545458  307961 network_create.go:124] attempt to create docker network calico-687868 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1122 00:22:41.545511  307961 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-687868 calico-687868
	I1122 00:22:41.608428  307961 network_create.go:108] docker network calico-687868 192.168.94.0/24 created
	I1122 00:22:41.608468  307961 kic.go:121] calculated static IP "192.168.94.2" for the "calico-687868" container
	I1122 00:22:41.608545  307961 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 00:22:41.631287  307961 cli_runner.go:164] Run: docker volume create calico-687868 --label name.minikube.sigs.k8s.io=calico-687868 --label created_by.minikube.sigs.k8s.io=true
	I1122 00:22:41.655689  307961 oci.go:103] Successfully created a docker volume calico-687868
	I1122 00:22:41.655805  307961 cli_runner.go:164] Run: docker run --rm --name calico-687868-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-687868 --entrypoint /usr/bin/test -v calico-687868:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1122 00:22:42.335620  307961 oci.go:107] Successfully prepared a docker volume calico-687868
	I1122 00:22:42.335703  307961 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:22:42.335721  307961 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:22:42.335806  307961 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-687868:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
	I1122 00:22:45.475485  299730 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:22:45.475511  299730 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:22:45.475576  299730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687868
	I1122 00:22:45.494021  299730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/kindnet-687868/id_rsa Username:docker}
	I1122 00:22:45.496762  299730 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:22:45.598522  299730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:22:45.678485  299730 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:22:45.678512  299730 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:22:45.678588  299730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-687868
	I1122 00:22:45.697923  299730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/kindnet-687868/id_rsa Username:docker}
	I1122 00:22:45.798647  299730 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:22:45.902187  299730 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1122 00:22:45.902240  299730 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:22:46.892729  299730 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.094037317s)
	I1122 00:22:46.893517  299730 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1122 00:22:46.895549  299730 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1122 00:22:45.640061  285133 node_ready.go:57] node "default-k8s-diff-port-418191" has "Ready":"False" status (will retry)
	W1122 00:22:47.640108  285133 node_ready.go:57] node "default-k8s-diff-port-418191" has "Ready":"False" status (will retry)
	I1122 00:22:48.139130  285133 node_ready.go:49] node "default-k8s-diff-port-418191" is "Ready"
	I1122 00:22:48.139162  285133 node_ready.go:38] duration metric: took 42.503124681s for node "default-k8s-diff-port-418191" to be "Ready" ...
	I1122 00:22:48.139176  285133 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:22:48.139227  285133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:22:48.151643  285133 api_server.go:72] duration metric: took 42.905389347s to wait for apiserver process to appear ...
	I1122 00:22:48.151672  285133 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:22:48.151691  285133 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1122 00:22:48.155857  285133 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1122 00:22:48.156901  285133 api_server.go:141] control plane version: v1.34.1
	I1122 00:22:48.156945  285133 api_server.go:131] duration metric: took 5.266963ms to wait for apiserver health ...
	I1122 00:22:48.156954  285133 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:22:48.159992  285133 system_pods.go:59] 8 kube-system pods found
	I1122 00:22:48.160024  285133 system_pods.go:61] "coredns-66bc5c9577-nft87" [73b10676-5bd9-4c0b-8e69-ecfd1e7373a8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:22:48.160030  285133 system_pods.go:61] "etcd-default-k8s-diff-port-418191" [2a5b7ded-2579-4cf5-80f9-9d0a659edec9] Running
	I1122 00:22:48.160036  285133 system_pods.go:61] "kindnet-p88n8" [054d63f4-c84a-4d5e-9731-d8dd34464e73] Running
	I1122 00:22:48.160040  285133 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-418191" [5b0a0fe7-7d96-4fe7-a4ce-08607e8d04da] Running
	I1122 00:22:48.160044  285133 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-418191" [b42b00df-9c81-49e6-a85c-f2f9b64ebead] Running
	I1122 00:22:48.160048  285133 system_pods.go:61] "kube-proxy-xf4dv" [ba054583-7e23-479e-a042-2c8fdf7c7b0a] Running
	I1122 00:22:48.160051  285133 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-418191" [45d3a2e6-ee3f-43e2-acf8-ce35d15e187d] Running
	I1122 00:22:48.160058  285133 system_pods.go:61] "storage-provisioner" [7c87a520-6723-4298-9c0d-6bde0b15aec8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:22:48.160065  285133 system_pods.go:74] duration metric: took 3.104931ms to wait for pod list to return data ...
	I1122 00:22:48.160074  285133 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:22:48.162421  285133 default_sa.go:45] found service account: "default"
	I1122 00:22:48.162440  285133 default_sa.go:55] duration metric: took 2.360963ms for default service account to be created ...
	I1122 00:22:48.162448  285133 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:22:48.165172  285133 system_pods.go:86] 8 kube-system pods found
	I1122 00:22:48.165211  285133 system_pods.go:89] "coredns-66bc5c9577-nft87" [73b10676-5bd9-4c0b-8e69-ecfd1e7373a8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:22:48.165219  285133 system_pods.go:89] "etcd-default-k8s-diff-port-418191" [2a5b7ded-2579-4cf5-80f9-9d0a659edec9] Running
	I1122 00:22:48.165230  285133 system_pods.go:89] "kindnet-p88n8" [054d63f4-c84a-4d5e-9731-d8dd34464e73] Running
	I1122 00:22:48.165236  285133 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-418191" [5b0a0fe7-7d96-4fe7-a4ce-08607e8d04da] Running
	I1122 00:22:48.165248  285133 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-418191" [b42b00df-9c81-49e6-a85c-f2f9b64ebead] Running
	I1122 00:22:48.165267  285133 system_pods.go:89] "kube-proxy-xf4dv" [ba054583-7e23-479e-a042-2c8fdf7c7b0a] Running
	I1122 00:22:48.165273  285133 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-418191" [45d3a2e6-ee3f-43e2-acf8-ce35d15e187d] Running
	I1122 00:22:48.165284  285133 system_pods.go:89] "storage-provisioner" [7c87a520-6723-4298-9c0d-6bde0b15aec8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:22:48.165333  285133 retry.go:31] will retry after 264.651933ms: missing components: kube-dns
	I1122 00:22:48.434909  285133 system_pods.go:86] 8 kube-system pods found
	I1122 00:22:48.434942  285133 system_pods.go:89] "coredns-66bc5c9577-nft87" [73b10676-5bd9-4c0b-8e69-ecfd1e7373a8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:22:48.434948  285133 system_pods.go:89] "etcd-default-k8s-diff-port-418191" [2a5b7ded-2579-4cf5-80f9-9d0a659edec9] Running
	I1122 00:22:48.434954  285133 system_pods.go:89] "kindnet-p88n8" [054d63f4-c84a-4d5e-9731-d8dd34464e73] Running
	I1122 00:22:48.434957  285133 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-418191" [5b0a0fe7-7d96-4fe7-a4ce-08607e8d04da] Running
	I1122 00:22:48.434961  285133 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-418191" [b42b00df-9c81-49e6-a85c-f2f9b64ebead] Running
	I1122 00:22:48.434964  285133 system_pods.go:89] "kube-proxy-xf4dv" [ba054583-7e23-479e-a042-2c8fdf7c7b0a] Running
	I1122 00:22:48.434968  285133 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-418191" [45d3a2e6-ee3f-43e2-acf8-ce35d15e187d] Running
	I1122 00:22:48.434973  285133 system_pods.go:89] "storage-provisioner" [7c87a520-6723-4298-9c0d-6bde0b15aec8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:22:48.434988  285133 retry.go:31] will retry after 316.912492ms: missing components: kube-dns
	I1122 00:22:48.755843  285133 system_pods.go:86] 8 kube-system pods found
	I1122 00:22:48.755873  285133 system_pods.go:89] "coredns-66bc5c9577-nft87" [73b10676-5bd9-4c0b-8e69-ecfd1e7373a8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:22:48.755880  285133 system_pods.go:89] "etcd-default-k8s-diff-port-418191" [2a5b7ded-2579-4cf5-80f9-9d0a659edec9] Running
	I1122 00:22:48.755916  285133 system_pods.go:89] "kindnet-p88n8" [054d63f4-c84a-4d5e-9731-d8dd34464e73] Running
	I1122 00:22:48.755923  285133 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-418191" [5b0a0fe7-7d96-4fe7-a4ce-08607e8d04da] Running
	I1122 00:22:48.755927  285133 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-418191" [b42b00df-9c81-49e6-a85c-f2f9b64ebead] Running
	I1122 00:22:48.755931  285133 system_pods.go:89] "kube-proxy-xf4dv" [ba054583-7e23-479e-a042-2c8fdf7c7b0a] Running
	I1122 00:22:48.755936  285133 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-418191" [45d3a2e6-ee3f-43e2-acf8-ce35d15e187d] Running
	I1122 00:22:48.755941  285133 system_pods.go:89] "storage-provisioner" [7c87a520-6723-4298-9c0d-6bde0b15aec8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:22:48.755960  285133 retry.go:31] will retry after 482.915275ms: missing components: kube-dns
	I1122 00:22:49.243694  285133 system_pods.go:86] 8 kube-system pods found
	I1122 00:22:49.243723  285133 system_pods.go:89] "coredns-66bc5c9577-nft87" [73b10676-5bd9-4c0b-8e69-ecfd1e7373a8] Running
	I1122 00:22:49.243729  285133 system_pods.go:89] "etcd-default-k8s-diff-port-418191" [2a5b7ded-2579-4cf5-80f9-9d0a659edec9] Running
	I1122 00:22:49.243734  285133 system_pods.go:89] "kindnet-p88n8" [054d63f4-c84a-4d5e-9731-d8dd34464e73] Running
	I1122 00:22:49.243737  285133 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-418191" [5b0a0fe7-7d96-4fe7-a4ce-08607e8d04da] Running
	I1122 00:22:49.243742  285133 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-418191" [b42b00df-9c81-49e6-a85c-f2f9b64ebead] Running
	I1122 00:22:49.243747  285133 system_pods.go:89] "kube-proxy-xf4dv" [ba054583-7e23-479e-a042-2c8fdf7c7b0a] Running
	I1122 00:22:49.243752  285133 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-418191" [45d3a2e6-ee3f-43e2-acf8-ce35d15e187d] Running
	I1122 00:22:49.243757  285133 system_pods.go:89] "storage-provisioner" [7c87a520-6723-4298-9c0d-6bde0b15aec8] Running
	I1122 00:22:49.243768  285133 system_pods.go:126] duration metric: took 1.081311961s to wait for k8s-apps to be running ...
	I1122 00:22:49.243778  285133 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:22:49.243837  285133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:22:49.258559  285133 system_svc.go:56] duration metric: took 14.768187ms WaitForService to wait for kubelet
	I1122 00:22:49.258592  285133 kubeadm.go:587] duration metric: took 44.012343316s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:22:49.258616  285133 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:22:49.261959  285133 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1122 00:22:49.261986  285133 node_conditions.go:123] node cpu capacity is 8
	I1122 00:22:49.262012  285133 node_conditions.go:105] duration metric: took 3.390936ms to run NodePressure ...
	I1122 00:22:49.262026  285133 start.go:242] waiting for startup goroutines ...
	I1122 00:22:49.262039  285133 start.go:247] waiting for cluster config update ...
	I1122 00:22:49.262057  285133 start.go:256] writing updated cluster config ...
	I1122 00:22:49.262356  285133 ssh_runner.go:195] Run: rm -f paused
	I1122 00:22:49.266584  285133 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:22:49.270363  285133 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nft87" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:49.275064  285133 pod_ready.go:94] pod "coredns-66bc5c9577-nft87" is "Ready"
	I1122 00:22:49.275085  285133 pod_ready.go:86] duration metric: took 4.697539ms for pod "coredns-66bc5c9577-nft87" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:49.277096  285133 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-418191" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:49.281295  285133 pod_ready.go:94] pod "etcd-default-k8s-diff-port-418191" is "Ready"
	I1122 00:22:49.281318  285133 pod_ready.go:86] duration metric: took 4.20058ms for pod "etcd-default-k8s-diff-port-418191" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:49.283153  285133 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-418191" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:49.287048  285133 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-418191" is "Ready"
	I1122 00:22:49.287072  285133 pod_ready.go:86] duration metric: took 3.900916ms for pod "kube-apiserver-default-k8s-diff-port-418191" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:49.289212  285133 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-418191" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:49.671236  285133 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-418191" is "Ready"
	I1122 00:22:49.671298  285133 pod_ready.go:86] duration metric: took 382.060516ms for pod "kube-controller-manager-default-k8s-diff-port-418191" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:49.871838  285133 pod_ready.go:83] waiting for pod "kube-proxy-xf4dv" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:50.271197  285133 pod_ready.go:94] pod "kube-proxy-xf4dv" is "Ready"
	I1122 00:22:50.271225  285133 pod_ready.go:86] duration metric: took 399.35647ms for pod "kube-proxy-xf4dv" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:50.471995  285133 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-418191" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:50.871573  285133 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-418191" is "Ready"
	I1122 00:22:50.871605  285133 pod_ready.go:86] duration metric: took 399.582988ms for pod "kube-scheduler-default-k8s-diff-port-418191" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:50.871619  285133 pod_ready.go:40] duration metric: took 1.60500287s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:22:50.916847  285133 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1122 00:22:50.918772  285133 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-418191" cluster and "default" namespace by default
	I1122 00:22:46.900904  307961 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-687868:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.565014405s)
	I1122 00:22:46.900940  307961 kic.go:203] duration metric: took 4.565214758s to extract preloaded images to volume ...
	W1122 00:22:46.901050  307961 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1122 00:22:46.901116  307961 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1122 00:22:46.901169  307961 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:22:46.972085  307961 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-687868 --name calico-687868 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-687868 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-687868 --network calico-687868 --ip 192.168.94.2 --volume calico-687868:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1122 00:22:47.316449  307961 cli_runner.go:164] Run: docker container inspect calico-687868 --format={{.State.Running}}
	I1122 00:22:47.339111  307961 cli_runner.go:164] Run: docker container inspect calico-687868 --format={{.State.Status}}
	I1122 00:22:47.360854  307961 cli_runner.go:164] Run: docker exec calico-687868 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:22:47.408960  307961 oci.go:144] the created container "calico-687868" has a running status.
	I1122 00:22:47.408988  307961 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-9059/.minikube/machines/calico-687868/id_rsa...
	I1122 00:22:47.502047  307961 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-9059/.minikube/machines/calico-687868/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:22:47.530508  307961 cli_runner.go:164] Run: docker container inspect calico-687868 --format={{.State.Status}}
	I1122 00:22:47.555115  307961 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:22:47.555144  307961 kic_runner.go:114] Args: [docker exec --privileged calico-687868 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:22:47.602377  307961 cli_runner.go:164] Run: docker container inspect calico-687868 --format={{.State.Status}}
	I1122 00:22:47.631356  307961 machine.go:94] provisionDockerMachine start ...
	I1122 00:22:47.631467  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-687868
	I1122 00:22:47.657568  307961 main.go:143] libmachine: Using SSH client type: native
	I1122 00:22:47.657897  307961 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1122 00:22:47.657915  307961 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:22:47.658528  307961 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49886->127.0.0.1:33118: read: connection reset by peer
	I1122 00:22:50.785488  307961 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-687868
	
	I1122 00:22:50.785520  307961 ubuntu.go:182] provisioning hostname "calico-687868"
	I1122 00:22:50.785594  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-687868
	I1122 00:22:50.804997  307961 main.go:143] libmachine: Using SSH client type: native
	I1122 00:22:50.805303  307961 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1122 00:22:50.805325  307961 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-687868 && echo "calico-687868" | sudo tee /etc/hostname
	I1122 00:22:50.945430  307961 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-687868
	
	I1122 00:22:50.945516  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-687868
	I1122 00:22:50.967847  307961 main.go:143] libmachine: Using SSH client type: native
	I1122 00:22:50.968131  307961 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1122 00:22:50.968174  307961 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-687868' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-687868/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-687868' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:22:51.096113  307961 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:22:51.096143  307961 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-9059/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-9059/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-9059/.minikube}
	I1122 00:22:51.096177  307961 ubuntu.go:190] setting up certificates
	I1122 00:22:51.096190  307961 provision.go:84] configureAuth start
	I1122 00:22:51.096253  307961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-687868
	I1122 00:22:51.116714  307961 provision.go:143] copyHostCerts
	I1122 00:22:51.116784  307961 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9059/.minikube/ca.pem, removing ...
	I1122 00:22:51.116793  307961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9059/.minikube/ca.pem
	I1122 00:22:51.116879  307961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-9059/.minikube/ca.pem (1082 bytes)
	I1122 00:22:51.116992  307961 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9059/.minikube/cert.pem, removing ...
	I1122 00:22:51.117001  307961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9059/.minikube/cert.pem
	I1122 00:22:51.117030  307961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-9059/.minikube/cert.pem (1123 bytes)
	I1122 00:22:51.117103  307961 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-9059/.minikube/key.pem, removing ...
	I1122 00:22:51.117110  307961 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-9059/.minikube/key.pem
	I1122 00:22:51.117145  307961 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-9059/.minikube/key.pem (1679 bytes)
	I1122 00:22:51.117252  307961 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-9059/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca-key.pem org=jenkins.calico-687868 san=[127.0.0.1 192.168.94.2 calico-687868 localhost minikube]
	I1122 00:22:51.177363  307961 provision.go:177] copyRemoteCerts
	I1122 00:22:51.177423  307961 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:22:51.177456  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-687868
	I1122 00:22:51.198362  307961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/calico-687868/id_rsa Username:docker}
	I1122 00:22:46.896251  299730 node_ready.go:35] waiting up to 15m0s for node "kindnet-687868" to be "Ready" ...
	I1122 00:22:46.897843  299730 addons.go:530] duration metric: took 1.488060967s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1122 00:22:47.399318  299730 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-687868" context rescaled to 1 replicas
	W1122 00:22:48.899298  299730 node_ready.go:57] node "kindnet-687868" has "Ready":"False" status (will retry)
	W1122 00:22:51.399068  299730 node_ready.go:57] node "kindnet-687868" has "Ready":"False" status (will retry)
	I1122 00:22:51.295172  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:22:51.317014  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1122 00:22:51.335921  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1122 00:22:51.354877  307961 provision.go:87] duration metric: took 258.665412ms to configureAuth
	I1122 00:22:51.354913  307961 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:22:51.355076  307961 config.go:182] Loaded profile config "calico-687868": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:22:51.355087  307961 machine.go:97] duration metric: took 3.723704226s to provisionDockerMachine
	I1122 00:22:51.355094  307961 client.go:176] duration metric: took 9.882685909s to LocalClient.Create
	I1122 00:22:51.355113  307961 start.go:167] duration metric: took 9.882752592s to libmachine.API.Create "calico-687868"
	I1122 00:22:51.355122  307961 start.go:293] postStartSetup for "calico-687868" (driver="docker")
	I1122 00:22:51.355131  307961 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:22:51.355184  307961 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:22:51.355220  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-687868
	I1122 00:22:51.375710  307961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/calico-687868/id_rsa Username:docker}
	I1122 00:22:51.473418  307961 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:22:51.477989  307961 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:22:51.478018  307961 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:22:51.478030  307961 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9059/.minikube/addons for local assets ...
	I1122 00:22:51.478090  307961 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-9059/.minikube/files for local assets ...
	I1122 00:22:51.478199  307961 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/ssl/certs/145302.pem -> 145302.pem in /etc/ssl/certs
	I1122 00:22:51.478361  307961 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:22:51.486766  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/ssl/certs/145302.pem --> /etc/ssl/certs/145302.pem (1708 bytes)
	I1122 00:22:51.510722  307961 start.go:296] duration metric: took 155.583338ms for postStartSetup
	I1122 00:22:51.511218  307961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-687868
	I1122 00:22:51.534970  307961 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/config.json ...
	I1122 00:22:51.535387  307961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:22:51.535441  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-687868
	I1122 00:22:51.555661  307961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/calico-687868/id_rsa Username:docker}
	I1122 00:22:51.645695  307961 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:22:51.651079  307961 start.go:128] duration metric: took 10.182769215s to createHost
	I1122 00:22:51.651106  307961 start.go:83] releasing machines lock for "calico-687868", held for 10.182952096s
	I1122 00:22:51.651183  307961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-687868
	I1122 00:22:51.670945  307961 ssh_runner.go:195] Run: cat /version.json
	I1122 00:22:51.670997  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-687868
	I1122 00:22:51.671016  307961 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:22:51.671104  307961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-687868
	I1122 00:22:51.692940  307961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/calico-687868/id_rsa Username:docker}
	I1122 00:22:51.693168  307961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/calico-687868/id_rsa Username:docker}
	I1122 00:22:51.844530  307961 ssh_runner.go:195] Run: systemctl --version
	I1122 00:22:51.851536  307961 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:22:51.857051  307961 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:22:51.857119  307961 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:22:51.884307  307961 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1122 00:22:51.884331  307961 start.go:496] detecting cgroup driver to use...
	I1122 00:22:51.884364  307961 detect.go:190] detected "systemd" cgroup driver on host os
	I1122 00:22:51.884417  307961 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1122 00:22:51.901060  307961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1122 00:22:51.916496  307961 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:22:51.916561  307961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:22:51.934812  307961 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:22:51.954675  307961 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:22:52.040868  307961 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:22:52.133710  307961 docker.go:234] disabling docker service ...
	I1122 00:22:52.133770  307961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:22:52.153908  307961 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:22:52.167469  307961 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:22:52.253225  307961 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:22:52.333598  307961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:22:52.346545  307961 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:22:52.362241  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1122 00:22:52.373432  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1122 00:22:52.383542  307961 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1122 00:22:52.383609  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1122 00:22:52.393239  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:22:52.403856  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1122 00:22:52.415204  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:22:52.426975  307961 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:22:52.437746  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1122 00:22:52.447374  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1122 00:22:52.456669  307961 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1122 00:22:52.466513  307961 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:22:52.474424  307961 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:22:52.482547  307961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:22:52.565198  307961 ssh_runner.go:195] Run: sudo systemctl restart containerd
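	(Note: the sed edits above rewrite /etc/containerd/config.toml in place before this restart: SystemdCgroup, sandbox_image, restrict_oom_score_adj, conf_dir and enable_unprivileged_ports. One way to confirm the edits landed — expected values taken directly from the commands above — is a quick grep after the restart:)
	    sudo grep -nE 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	    # SystemdCgroup = true
	    # sandbox_image = "registry.k8s.io/pause:3.10.1"
	    # restrict_oom_score_adj = false
	    # conf_dir = "/etc/cni/net.d"
	    # enable_unprivileged_ports = true
	    sudo systemctl is-active containerd    # should print "active" once the restart has completed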
	I1122 00:22:52.667997  307961 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1122 00:22:52.668074  307961 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1122 00:22:52.672213  307961 start.go:564] Will wait 60s for crictl version
	I1122 00:22:52.672295  307961 ssh_runner.go:195] Run: which crictl
	I1122 00:22:52.676244  307961 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:22:52.705563  307961 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1122 00:22:52.705639  307961 ssh_runner.go:195] Run: containerd --version
	I1122 00:22:52.727577  307961 ssh_runner.go:195] Run: containerd --version
	I1122 00:22:52.752328  307961 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1122 00:22:52.753949  307961 cli_runner.go:164] Run: docker network inspect calico-687868 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:22:52.774961  307961 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1122 00:22:52.779758  307961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:22:52.791083  307961 kubeadm.go:884] updating cluster {Name:calico-687868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-687868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:22:52.791246  307961 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:22:52.791324  307961 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:22:52.817072  307961 containerd.go:627] all images are preloaded for containerd runtime.
	I1122 00:22:52.817096  307961 containerd.go:534] Images already preloaded, skipping extraction
	I1122 00:22:52.817153  307961 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:22:52.842371  307961 containerd.go:627] all images are preloaded for containerd runtime.
	I1122 00:22:52.842395  307961 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:22:52.842402  307961 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 containerd true true} ...
	I1122 00:22:52.842488  307961 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-687868 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-687868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
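	(Note: the [Service] drop-in above is written a few lines later to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, alongside a fresh /lib/systemd/system/kubelet.service. If kubelet misbehaved, the merged unit could be inspected with systemctl; this is a generic check, not something the test itself runs:)
	    sudo systemctl cat kubelet --no-pager    # base unit plus the 10-kubeadm.conf override
	    sudo systemctl is-active kubelet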
	I1122 00:22:52.842546  307961 ssh_runner.go:195] Run: sudo crictl info
	I1122 00:22:52.870293  307961 cni.go:84] Creating CNI manager for "calico"
	I1122 00:22:52.870324  307961 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:22:52.870347  307961 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-687868 NodeName:calico-687868 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:22:52.870470  307961 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "calico-687868"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:22:52.870535  307961 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:22:52.879365  307961 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:22:52.879445  307961 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:22:52.887603  307961 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1122 00:22:52.902654  307961 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:22:52.919678  307961 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
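	(Note: the kubeadm config dump above is what gets written to /var/tmp/minikube/kubeadm.yaml.new and later copied to kubeadm.yaml just before init. Assuming the same paths and binaries, the file can be sanity-checked without mutating the node via kubeadm's dry-run mode:)
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml.new \
	      --dry-run --ignore-preflight-errors=all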
	I1122 00:22:52.933422  307961 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:22:52.937512  307961 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:22:52.948245  307961 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:22:53.032432  307961 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:22:53.056807  307961 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868 for IP: 192.168.94.2
	I1122 00:22:53.056828  307961 certs.go:195] generating shared ca certs ...
	I1122 00:22:53.056843  307961 certs.go:227] acquiring lock for ca certs: {Name:mkcee17f48cab2703d4de8a78a6fb8af44d9e7e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:22:53.057053  307961 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-9059/.minikube/ca.key
	I1122 00:22:53.057111  307961 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-9059/.minikube/proxy-client-ca.key
	I1122 00:22:53.057133  307961 certs.go:257] generating profile certs ...
	I1122 00:22:53.057219  307961 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/client.key
	I1122 00:22:53.057243  307961 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/client.crt with IP's: []
	I1122 00:22:53.088755  307961 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/client.crt ...
	I1122 00:22:53.088785  307961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/client.crt: {Name:mk84add15339a60b5ccef24fe9963e725101e6f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:22:53.088964  307961 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/client.key ...
	I1122 00:22:53.088978  307961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/client.key: {Name:mka7fa3bf79a958c62b6bebc82776d155235b2be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:22:53.089091  307961 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/apiserver.key.0cb251a4
	I1122 00:22:53.089108  307961 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/apiserver.crt.0cb251a4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1122 00:22:53.157807  307961 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/apiserver.crt.0cb251a4 ...
	I1122 00:22:53.157834  307961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/apiserver.crt.0cb251a4: {Name:mkf02a48ab788e195a706993334f63af02ad209f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:22:53.157993  307961 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/apiserver.key.0cb251a4 ...
	I1122 00:22:53.158006  307961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/apiserver.key.0cb251a4: {Name:mk5b5739150ad75a952c110a888cc20869247d98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:22:53.158077  307961 certs.go:382] copying /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/apiserver.crt.0cb251a4 -> /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/apiserver.crt
	I1122 00:22:53.158160  307961 certs.go:386] copying /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/apiserver.key.0cb251a4 -> /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/apiserver.key
	I1122 00:22:53.158222  307961 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/proxy-client.key
	I1122 00:22:53.158237  307961 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/proxy-client.crt with IP's: []
	I1122 00:22:53.315075  307961 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/proxy-client.crt ...
	I1122 00:22:53.315105  307961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/proxy-client.crt: {Name:mk5bd4e0fee11dec51d745641c9157754959840a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:22:53.315303  307961 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/proxy-client.key ...
	I1122 00:22:53.315318  307961 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/proxy-client.key: {Name:mke0af0fbf2534370a5a96beb58d05ea10807b56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:22:53.315504  307961 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/14530.pem (1338 bytes)
	W1122 00:22:53.315544  307961 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-9059/.minikube/certs/14530_empty.pem, impossibly tiny 0 bytes
	I1122 00:22:53.315554  307961 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca-key.pem (1675 bytes)
	I1122 00:22:53.315579  307961 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/ca.pem (1082 bytes)
	I1122 00:22:53.315609  307961 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:22:53.315632  307961 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/certs/key.pem (1679 bytes)
	I1122 00:22:53.315674  307961 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/ssl/certs/145302.pem (1708 bytes)
	I1122 00:22:53.316303  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:22:53.335675  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:22:53.353713  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:22:53.371975  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1122 00:22:53.390816  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1122 00:22:53.409886  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:22:53.429071  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:22:53.448549  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/calico-687868/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:22:53.467981  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/certs/14530.pem --> /usr/share/ca-certificates/14530.pem (1338 bytes)
	I1122 00:22:53.490504  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/ssl/certs/145302.pem --> /usr/share/ca-certificates/145302.pem (1708 bytes)
	I1122 00:22:53.512350  307961 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-9059/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:22:53.534724  307961 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:22:53.551641  307961 ssh_runner.go:195] Run: openssl version
	I1122 00:22:53.558904  307961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:22:53.570019  307961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:22:53.575175  307961 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:47 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:22:53.575238  307961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:22:53.614924  307961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:22:53.624473  307961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14530.pem && ln -fs /usr/share/ca-certificates/14530.pem /etc/ssl/certs/14530.pem"
	I1122 00:22:53.635695  307961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14530.pem
	I1122 00:22:53.640743  307961 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:52 /usr/share/ca-certificates/14530.pem
	I1122 00:22:53.640812  307961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14530.pem
	I1122 00:22:53.676075  307961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14530.pem /etc/ssl/certs/51391683.0"
	I1122 00:22:53.685329  307961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/145302.pem && ln -fs /usr/share/ca-certificates/145302.pem /etc/ssl/certs/145302.pem"
	I1122 00:22:53.695018  307961 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/145302.pem
	I1122 00:22:53.699097  307961 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:52 /usr/share/ca-certificates/145302.pem
	I1122 00:22:53.699164  307961 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/145302.pem
	I1122 00:22:53.736102  307961 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/145302.pem /etc/ssl/certs/3ec20f2e.0"
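	(Note: the symlink names used above — b5213941.0, 51391683.0, 3ec20f2e.0 — are the OpenSSL subject hashes printed by the preceding openssl x509 -hash calls; that hash-dot-zero convention is how OpenSSL-linked programs look up a CA in /etc/ssl/certs. A quick way to verify one mapping:)
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    ls -l "/etc/ssl/certs/${h}.0"    # expected: a symlink back to /etc/ssl/certs/minikubeCA.pem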
	I1122 00:22:53.745095  307961 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:22:53.748884  307961 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1122 00:22:53.748988  307961 kubeadm.go:401] StartCluster: {Name:calico-687868 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-687868 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:22:53.749092  307961 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1122 00:22:53.749146  307961 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:22:53.777622  307961 cri.go:89] found id: ""
	I1122 00:22:53.777698  307961 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:22:53.786436  307961 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:22:53.795001  307961 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:22:53.795065  307961 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:22:53.803020  307961 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:22:53.803038  307961 kubeadm.go:158] found existing configuration files:
	
	I1122 00:22:53.803099  307961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1122 00:22:53.812014  307961 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:22:53.812073  307961 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:22:53.820127  307961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1122 00:22:53.828878  307961 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:22:53.828945  307961 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:22:53.838059  307961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1122 00:22:53.847366  307961 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:22:53.847436  307961 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:22:53.856322  307961 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1122 00:22:53.865065  307961 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:22:53.865134  307961 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1122 00:22:53.872822  307961 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:22:53.935133  307961 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1122 00:22:53.997562  307961 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1122 00:22:53.399522  299730 node_ready.go:57] node "kindnet-687868" has "Ready":"False" status (will retry)
	W1122 00:22:55.400183  299730 node_ready.go:57] node "kindnet-687868" has "Ready":"False" status (will retry)
	W1122 00:22:57.899281  299730 node_ready.go:57] node "kindnet-687868" has "Ready":"False" status (will retry)
	I1122 00:22:58.401287  299730 node_ready.go:49] node "kindnet-687868" is "Ready"
	I1122 00:22:58.401321  299730 node_ready.go:38] duration metric: took 11.505025696s for node "kindnet-687868" to be "Ready" ...
	I1122 00:22:58.401337  299730 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:22:58.401391  299730 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:22:58.418574  299730 api_server.go:72] duration metric: took 13.008879117s to wait for apiserver process to appear ...
	I1122 00:22:58.418601  299730 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:22:58.418628  299730 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1122 00:22:58.425007  299730 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1122 00:22:58.426141  299730 api_server.go:141] control plane version: v1.34.1
	I1122 00:22:58.426176  299730 api_server.go:131] duration metric: took 7.566963ms to wait for apiserver health ...
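	(Note: the healthz probe above hits the kindnet-687868 apiserver directly. The same check can be reproduced from the host through kubectl, or with curl if the profile's client certificate is supplied; the certificate paths below follow the .minikube layout seen elsewhere in this log and are otherwise an assumption:)
	    kubectl --context kindnet-687868 get --raw /healthz    # prints "ok"
	    curl --cacert ~/.minikube/ca.crt \
	         --cert ~/.minikube/profiles/kindnet-687868/client.crt \
	         --key ~/.minikube/profiles/kindnet-687868/client.key \
	         https://192.168.85.2:8443/healthz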
	I1122 00:22:58.426188  299730 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:22:58.432540  299730 system_pods.go:59] 8 kube-system pods found
	I1122 00:22:58.432583  299730 system_pods.go:61] "coredns-66bc5c9577-2gbmj" [aa1f8839-2acb-4320-afbb-17ab17befbd2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:22:58.432590  299730 system_pods.go:61] "etcd-kindnet-687868" [265a9137-0b2e-49d3-b93a-6c1a51704cc0] Running
	I1122 00:22:58.432599  299730 system_pods.go:61] "kindnet-v55v2" [a2d5e881-4bdc-4d1a-a746-6a95bbbdcb73] Running
	I1122 00:22:58.432605  299730 system_pods.go:61] "kube-apiserver-kindnet-687868" [e8346bb9-5bcb-4faf-bb73-54aedb9b352b] Running
	I1122 00:22:58.432611  299730 system_pods.go:61] "kube-controller-manager-kindnet-687868" [13d2f39b-29c5-4d87-b0ec-171922e2206a] Running
	I1122 00:22:58.432617  299730 system_pods.go:61] "kube-proxy-mhflf" [e992b4da-e052-47e9-98ae-56f362a83a6b] Running
	I1122 00:22:58.432621  299730 system_pods.go:61] "kube-scheduler-kindnet-687868" [3a6856c6-1aa1-4f88-b7aa-9787b3a96526] Running
	I1122 00:22:58.432629  299730 system_pods.go:61] "storage-provisioner" [c7a3c643-9089-44d0-862b-6d934c1a1bfb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:22:58.432638  299730 system_pods.go:74] duration metric: took 6.441918ms to wait for pod list to return data ...
	I1122 00:22:58.432649  299730 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:22:58.442130  299730 default_sa.go:45] found service account: "default"
	I1122 00:22:58.442163  299730 default_sa.go:55] duration metric: took 9.506694ms for default service account to be created ...
	I1122 00:22:58.442175  299730 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:22:58.462285  299730 system_pods.go:86] 8 kube-system pods found
	I1122 00:22:58.462326  299730 system_pods.go:89] "coredns-66bc5c9577-2gbmj" [aa1f8839-2acb-4320-afbb-17ab17befbd2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:22:58.462334  299730 system_pods.go:89] "etcd-kindnet-687868" [265a9137-0b2e-49d3-b93a-6c1a51704cc0] Running
	I1122 00:22:58.462342  299730 system_pods.go:89] "kindnet-v55v2" [a2d5e881-4bdc-4d1a-a746-6a95bbbdcb73] Running
	I1122 00:22:58.462348  299730 system_pods.go:89] "kube-apiserver-kindnet-687868" [e8346bb9-5bcb-4faf-bb73-54aedb9b352b] Running
	I1122 00:22:58.462354  299730 system_pods.go:89] "kube-controller-manager-kindnet-687868" [13d2f39b-29c5-4d87-b0ec-171922e2206a] Running
	I1122 00:22:58.462359  299730 system_pods.go:89] "kube-proxy-mhflf" [e992b4da-e052-47e9-98ae-56f362a83a6b] Running
	I1122 00:22:58.462364  299730 system_pods.go:89] "kube-scheduler-kindnet-687868" [3a6856c6-1aa1-4f88-b7aa-9787b3a96526] Running
	I1122 00:22:58.462371  299730 system_pods.go:89] "storage-provisioner" [c7a3c643-9089-44d0-862b-6d934c1a1bfb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:22:58.462398  299730 retry.go:31] will retry after 307.241941ms: missing components: kube-dns
	I1122 00:22:58.774365  299730 system_pods.go:86] 8 kube-system pods found
	I1122 00:22:58.774406  299730 system_pods.go:89] "coredns-66bc5c9577-2gbmj" [aa1f8839-2acb-4320-afbb-17ab17befbd2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:22:58.774416  299730 system_pods.go:89] "etcd-kindnet-687868" [265a9137-0b2e-49d3-b93a-6c1a51704cc0] Running
	I1122 00:22:58.774432  299730 system_pods.go:89] "kindnet-v55v2" [a2d5e881-4bdc-4d1a-a746-6a95bbbdcb73] Running
	I1122 00:22:58.774439  299730 system_pods.go:89] "kube-apiserver-kindnet-687868" [e8346bb9-5bcb-4faf-bb73-54aedb9b352b] Running
	I1122 00:22:58.774446  299730 system_pods.go:89] "kube-controller-manager-kindnet-687868" [13d2f39b-29c5-4d87-b0ec-171922e2206a] Running
	I1122 00:22:58.774462  299730 system_pods.go:89] "kube-proxy-mhflf" [e992b4da-e052-47e9-98ae-56f362a83a6b] Running
	I1122 00:22:58.774468  299730 system_pods.go:89] "kube-scheduler-kindnet-687868" [3a6856c6-1aa1-4f88-b7aa-9787b3a96526] Running
	I1122 00:22:58.774476  299730 system_pods.go:89] "storage-provisioner" [c7a3c643-9089-44d0-862b-6d934c1a1bfb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:22:58.774500  299730 retry.go:31] will retry after 378.171731ms: missing components: kube-dns
	I1122 00:22:59.157835  299730 system_pods.go:86] 8 kube-system pods found
	I1122 00:22:59.157865  299730 system_pods.go:89] "coredns-66bc5c9577-2gbmj" [aa1f8839-2acb-4320-afbb-17ab17befbd2] Running
	I1122 00:22:59.157870  299730 system_pods.go:89] "etcd-kindnet-687868" [265a9137-0b2e-49d3-b93a-6c1a51704cc0] Running
	I1122 00:22:59.157874  299730 system_pods.go:89] "kindnet-v55v2" [a2d5e881-4bdc-4d1a-a746-6a95bbbdcb73] Running
	I1122 00:22:59.157878  299730 system_pods.go:89] "kube-apiserver-kindnet-687868" [e8346bb9-5bcb-4faf-bb73-54aedb9b352b] Running
	I1122 00:22:59.157881  299730 system_pods.go:89] "kube-controller-manager-kindnet-687868" [13d2f39b-29c5-4d87-b0ec-171922e2206a] Running
	I1122 00:22:59.157886  299730 system_pods.go:89] "kube-proxy-mhflf" [e992b4da-e052-47e9-98ae-56f362a83a6b] Running
	I1122 00:22:59.157889  299730 system_pods.go:89] "kube-scheduler-kindnet-687868" [3a6856c6-1aa1-4f88-b7aa-9787b3a96526] Running
	I1122 00:22:59.157894  299730 system_pods.go:89] "storage-provisioner" [c7a3c643-9089-44d0-862b-6d934c1a1bfb] Running
	I1122 00:22:59.157901  299730 system_pods.go:126] duration metric: took 715.720256ms to wait for k8s-apps to be running ...
	I1122 00:22:59.157910  299730 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:22:59.157971  299730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:22:59.172411  299730 system_svc.go:56] duration metric: took 14.488865ms WaitForService to wait for kubelet
	I1122 00:22:59.172446  299730 kubeadm.go:587] duration metric: took 13.762754133s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:22:59.172468  299730 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:22:59.175830  299730 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1122 00:22:59.175863  299730 node_conditions.go:123] node cpu capacity is 8
	I1122 00:22:59.175883  299730 node_conditions.go:105] duration metric: took 3.408866ms to run NodePressure ...
	I1122 00:22:59.175900  299730 start.go:242] waiting for startup goroutines ...
	I1122 00:22:59.175913  299730 start.go:247] waiting for cluster config update ...
	I1122 00:22:59.175930  299730 start.go:256] writing updated cluster config ...
	I1122 00:22:59.176241  299730 ssh_runner.go:195] Run: rm -f paused
	I1122 00:22:59.180178  299730 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:22:59.184485  299730 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2gbmj" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:59.189836  299730 pod_ready.go:94] pod "coredns-66bc5c9577-2gbmj" is "Ready"
	I1122 00:22:59.189865  299730 pod_ready.go:86] duration metric: took 5.3522ms for pod "coredns-66bc5c9577-2gbmj" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:59.192193  299730 pod_ready.go:83] waiting for pod "etcd-kindnet-687868" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:59.196438  299730 pod_ready.go:94] pod "etcd-kindnet-687868" is "Ready"
	I1122 00:22:59.196464  299730 pod_ready.go:86] duration metric: took 4.247291ms for pod "etcd-kindnet-687868" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:59.198581  299730 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-687868" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:59.202709  299730 pod_ready.go:94] pod "kube-apiserver-kindnet-687868" is "Ready"
	I1122 00:22:59.202736  299730 pod_ready.go:86] duration metric: took 4.12768ms for pod "kube-apiserver-kindnet-687868" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:59.204985  299730 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-687868" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:59.585099  299730 pod_ready.go:94] pod "kube-controller-manager-kindnet-687868" is "Ready"
	I1122 00:22:59.585125  299730 pod_ready.go:86] duration metric: took 380.115682ms for pod "kube-controller-manager-kindnet-687868" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:22:59.785298  299730 pod_ready.go:83] waiting for pod "kube-proxy-mhflf" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:23:00.186023  299730 pod_ready.go:94] pod "kube-proxy-mhflf" is "Ready"
	I1122 00:23:00.186099  299730 pod_ready.go:86] duration metric: took 400.771242ms for pod "kube-proxy-mhflf" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:23:00.385828  299730 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-687868" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:23:00.786874  299730 pod_ready.go:94] pod "kube-scheduler-kindnet-687868" is "Ready"
	I1122 00:23:00.786910  299730 pod_ready.go:86] duration metric: took 401.052938ms for pod "kube-scheduler-kindnet-687868" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:23:00.786929  299730 pod_ready.go:40] duration metric: took 1.606715408s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:23:00.848436  299730 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1122 00:23:00.850395  299730 out.go:179] * Done! kubectl is now configured to use "kindnet-687868" cluster and "default" namespace by default
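	(Note: at this point the kindnet-687868 cluster is up and its kubeconfig context is the default; the rest of this section is the post-mortem container status and containerd dump for the default-k8s-diff-port-418191 node. A typical follow-up against the freshly created context would be:)
	    kubectl --context kindnet-687868 get nodes -o wide
	    kubectl --context kindnet-687868 get pods -A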
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	8cb631d796c4a       56cc512116c8f       10 seconds ago       Running             busybox                   0                   bd7fc0ae3a9e4       busybox                                                default
	8af2857f48115       52546a367cc9e       15 seconds ago       Running             coredns                   0                   acd81cf74652c       coredns-66bc5c9577-nft87                               kube-system
	1c7a5352ca64e       6e38f40d628db       15 seconds ago       Running             storage-provisioner       0                   7a69de2d42319       storage-provisioner                                    kube-system
	a69c9c4b16631       fc25172553d79       56 seconds ago       Running             kube-proxy                0                   9cb31bdfbde51       kube-proxy-xf4dv                                       kube-system
	f5fede7e17e78       409467f978b4a       56 seconds ago       Running             kindnet-cni               0                   829a2beb85956       kindnet-p88n8                                          kube-system
	ae1b0ba64f1c9       c80c8dbafe7dd       About a minute ago   Running             kube-controller-manager   0                   f5c3464f944b6       kube-controller-manager-default-k8s-diff-port-418191   kube-system
	c3a7e2e1d4e18       7dd6aaa1717ab       About a minute ago   Running             kube-scheduler            0                   db6014bcc06c7       kube-scheduler-default-k8s-diff-port-418191            kube-system
	c8c0e9532df5c       5f1f5298c888d       About a minute ago   Running             etcd                      0                   a3332475a9e1b       etcd-default-k8s-diff-port-418191                      kube-system
	e3c5f6695cef6       c3994bc696102       About a minute ago   Running             kube-apiserver            0                   fc6387175ded9       kube-apiserver-default-k8s-diff-port-418191            kube-system
	
	
	==> containerd <==
	Nov 22 00:22:48 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:48.299034299Z" level=info msg="CreateContainer within sandbox \"7a69de2d42319c13c183b73ad653be56f5f6977a2b22587a8f007f92cdfece1d\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"1c7a5352ca64ea1fd8d75825ba8b5b9b860a37d4a5ebd0314369cfebccf8a213\""
	Nov 22 00:22:48 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:48.300078781Z" level=info msg="StartContainer for \"1c7a5352ca64ea1fd8d75825ba8b5b9b860a37d4a5ebd0314369cfebccf8a213\""
	Nov 22 00:22:48 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:48.300757825Z" level=info msg="Container 8af2857f4811594e175fd3972595724d8e79fb6f60717af2ac2a5679742a45c8: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:22:48 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:48.301699225Z" level=info msg="connecting to shim 1c7a5352ca64ea1fd8d75825ba8b5b9b860a37d4a5ebd0314369cfebccf8a213" address="unix:///run/containerd/s/1332691ed35dbde5a1e5ae76501320189df47d59d0c196e04b91786f4272bf3b" protocol=ttrpc version=3
	Nov 22 00:22:48 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:48.307894511Z" level=info msg="CreateContainer within sandbox \"acd81cf74652c5e8ad0fb65874ed2794660f7fd697c3cec3e458fe04f75fa31c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8af2857f4811594e175fd3972595724d8e79fb6f60717af2ac2a5679742a45c8\""
	Nov 22 00:22:48 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:48.308530378Z" level=info msg="StartContainer for \"8af2857f4811594e175fd3972595724d8e79fb6f60717af2ac2a5679742a45c8\""
	Nov 22 00:22:48 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:48.309701335Z" level=info msg="connecting to shim 8af2857f4811594e175fd3972595724d8e79fb6f60717af2ac2a5679742a45c8" address="unix:///run/containerd/s/8d11fb0aaa540c858a179d9068f0f438256decff6b9ed79d10fe3cd496768944" protocol=ttrpc version=3
	Nov 22 00:22:48 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:48.364574911Z" level=info msg="StartContainer for \"1c7a5352ca64ea1fd8d75825ba8b5b9b860a37d4a5ebd0314369cfebccf8a213\" returns successfully"
	Nov 22 00:22:48 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:48.369393687Z" level=info msg="StartContainer for \"8af2857f4811594e175fd3972595724d8e79fb6f60717af2ac2a5679742a45c8\" returns successfully"
	Nov 22 00:22:51 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:51.392957281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:62d761e0-90e8-4ae1-98f2-3a0febcc01d1,Namespace:default,Attempt:0,}"
	Nov 22 00:22:51 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:51.434296758Z" level=info msg="connecting to shim bd7fc0ae3a9e449b83a429aeda2086d0ab08ea3249db59e600d179c219dc212d" address="unix:///run/containerd/s/d3afb5789e08cd77833d2528991c71e11fd7244542185b7ad39ea967e5ea400d" namespace=k8s.io protocol=ttrpc version=3
	Nov 22 00:22:51 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:51.504968867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:62d761e0-90e8-4ae1-98f2-3a0febcc01d1,Namespace:default,Attempt:0,} returns sandbox id \"bd7fc0ae3a9e449b83a429aeda2086d0ab08ea3249db59e600d179c219dc212d\""
	Nov 22 00:22:51 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:51.507193613Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 22 00:22:53 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:53.555399473Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:22:53 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:53.556351066Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=2396646"
	Nov 22 00:22:53 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:53.557790325Z" level=info msg="ImageCreate event name:\"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:22:53 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:53.559802719Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:22:53 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:53.560515407Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"2395207\" in 2.053268118s"
	Nov 22 00:22:53 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:53.560558901Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c\""
	Nov 22 00:22:53 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:53.565142218Z" level=info msg="CreateContainer within sandbox \"bd7fc0ae3a9e449b83a429aeda2086d0ab08ea3249db59e600d179c219dc212d\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 22 00:22:53 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:53.573524922Z" level=info msg="Container 8cb631d796c4a8d910dd262320e37f2f12249d644a633c4386a571a9ecb52570: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:22:53 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:53.583735418Z" level=info msg="CreateContainer within sandbox \"bd7fc0ae3a9e449b83a429aeda2086d0ab08ea3249db59e600d179c219dc212d\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"8cb631d796c4a8d910dd262320e37f2f12249d644a633c4386a571a9ecb52570\""
	Nov 22 00:22:53 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:53.584467142Z" level=info msg="StartContainer for \"8cb631d796c4a8d910dd262320e37f2f12249d644a633c4386a571a9ecb52570\""
	Nov 22 00:22:53 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:53.585490347Z" level=info msg="connecting to shim 8cb631d796c4a8d910dd262320e37f2f12249d644a633c4386a571a9ecb52570" address="unix:///run/containerd/s/d3afb5789e08cd77833d2528991c71e11fd7244542185b7ad39ea967e5ea400d" protocol=ttrpc version=3
	Nov 22 00:22:53 default-k8s-diff-port-418191 containerd[664]: time="2025-11-22T00:22:53.641010933Z" level=info msg="StartContainer for \"8cb631d796c4a8d910dd262320e37f2f12249d644a633c4386a571a9ecb52570\" returns successfully"
	
	
	==> coredns [8af2857f4811594e175fd3972595724d8e79fb6f60717af2ac2a5679742a45c8] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49634 - 16478 "HINFO IN 3208574080046988828.3383671134030109306. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.067175049s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-418191
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-418191
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=default-k8s-diff-port-418191
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_22_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:21:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-418191
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:23:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:23:01 +0000   Sat, 22 Nov 2025 00:21:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:23:01 +0000   Sat, 22 Nov 2025 00:21:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:23:01 +0000   Sat, 22 Nov 2025 00:21:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:23:01 +0000   Sat, 22 Nov 2025 00:22:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-418191
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 5665009e93b91d39dc05718b691e3875
	  System UUID:                31190414-abfb-47c3-96b3-6eea69cb23df
	  Boot ID:                    725aae03-f893-4e0b-b029-cbd3b00ccfdd
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-nft87                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     58s
	  kube-system                 etcd-default-k8s-diff-port-418191                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         63s
	  kube-system                 kindnet-p88n8                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      58s
	  kube-system                 kube-apiserver-default-k8s-diff-port-418191             250m (3%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-418191    200m (2%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-proxy-xf4dv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-default-k8s-diff-port-418191             100m (1%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 56s                kube-proxy       
	  Normal  Starting                 71s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-418191 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-418191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     71s (x7 over 71s)  kubelet          Node default-k8s-diff-port-418191 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  71s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 64s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  64s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  63s                kubelet          Node default-k8s-diff-port-418191 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s                kubelet          Node default-k8s-diff-port-418191 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s                kubelet          Node default-k8s-diff-port-418191 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           59s                node-controller  Node default-k8s-diff-port-418191 event: Registered Node default-k8s-diff-port-418191 in Controller
	  Normal  NodeReady                16s                kubelet          Node default-k8s-diff-port-418191 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov21 23:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000865] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.087013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.410276] i8042: Warning: Keylock active
	[  +0.014947] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.495836] block sda: the capability attribute has been deprecated.
	[  +0.091740] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024333] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.452540] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [c8c0e9532df5ca107cdb178f44b3fc3459121a60dedba47feec5fad7f04c4c90] <==
	{"level":"warn","ts":"2025-11-22T00:21:56.693635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.701180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.710724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.718404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.726916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.735205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.742418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.749783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.757387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.765959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.773942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.790194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.797568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.813420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.822071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.830168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:21:56.901594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:22:03.123274Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.924086ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" limit:1 ","response":"range_response_count:1 size:216"}
	{"level":"info","ts":"2025-11-22T00:22:03.123381Z","caller":"traceutil/trace.go:172","msg":"trace[180139130] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/generic-garbage-collector; range_end:; response_count:1; response_revision:326; }","duration":"108.061525ms","start":"2025-11-22T00:22:03.015297Z","end":"2025-11-22T00:22:03.123358Z","steps":["trace[180139130] 'range keys from in-memory index tree'  (duration: 107.765304ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:22:03.417201Z","caller":"traceutil/trace.go:172","msg":"trace[1627833544] transaction","detail":"{read_only:false; response_revision:327; number_of_response:1; }","duration":"101.297202ms","start":"2025-11-22T00:22:03.315885Z","end":"2025-11-22T00:22:03.417183Z","steps":["trace[1627833544] 'process raft request'  (duration: 101.182062ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:22:03.664903Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"198.792131ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" limit:1 ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2025-11-22T00:22:03.664973Z","caller":"traceutil/trace.go:172","msg":"trace[2125039577] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:327; }","duration":"198.897465ms","start":"2025-11-22T00:22:03.466061Z","end":"2025-11-22T00:22:03.664958Z","steps":["trace[2125039577] 'range keys from in-memory index tree'  (duration: 198.627516ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-22T00:22:03.875435Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.54148ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-11-22T00:22:03.875515Z","caller":"traceutil/trace.go:172","msg":"trace[507791274] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:328; }","duration":"109.634279ms","start":"2025-11-22T00:22:03.765866Z","end":"2025-11-22T00:22:03.875500Z","steps":["trace[507791274] 'range keys from in-memory index tree'  (duration: 109.395481ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:22:04.002519Z","caller":"traceutil/trace.go:172","msg":"trace[164048686] transaction","detail":"{read_only:false; response_revision:329; number_of_response:1; }","duration":"123.699527ms","start":"2025-11-22T00:22:03.878798Z","end":"2025-11-22T00:22:04.002498Z","steps":["trace[164048686] 'process raft request'  (duration: 123.575665ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:23:03 up  1:05,  0 user,  load average: 5.82, 4.17, 2.66
	Linux default-k8s-diff-port-418191 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f5fede7e17e78030ced629bfd5f2ba2e1dd6d87907e9ede26582b5d1a6cf0f01] <==
	I1122 00:22:07.438950       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:22:07.439213       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1122 00:22:07.439421       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:22:07.439444       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:22:07.439471       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:22:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:22:07.834011       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:22:07.834092       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:22:07.834112       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:22:07.834328       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1122 00:22:37.737066       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1122 00:22:37.737066       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1122 00:22:37.737095       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1122 00:22:37.737116       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1122 00:22:39.234994       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:22:39.235035       1 metrics.go:72] Registering metrics
	I1122 00:22:39.235100       1 controller.go:711] "Syncing nftables rules"
	I1122 00:22:47.743345       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1122 00:22:47.743425       1 main.go:301] handling current node
	I1122 00:22:57.738515       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1122 00:22:57.738561       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e3c5f6695cef635b978ec2213007737e56d02278c0a48da875ac581bafc81526] <==
	I1122 00:21:57.479119       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1122 00:21:57.479156       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1122 00:21:57.489202       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:21:57.490242       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1122 00:21:57.499549       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:21:57.500744       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:21:57.673524       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:21:58.377322       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1122 00:21:58.381983       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1122 00:21:58.382002       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:21:58.994599       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:21:59.035954       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:21:59.186377       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1122 00:21:59.193008       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1122 00:21:59.194302       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:21:59.198757       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:21:59.419323       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:22:00.091661       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:22:00.109953       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1122 00:22:00.118448       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:22:04.424672       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:22:04.430411       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:22:05.074551       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:22:05.328099       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1122 00:23:00.213695       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:42548: use of closed network connection
	
	
	==> kube-controller-manager [ae1b0ba64f1c92f88c5ebbaa061488c685e44c2a1a90f6645307c306d0c8c6f5] <==
	I1122 00:22:04.418886       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1122 00:22:04.418910       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1122 00:22:04.419438       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1122 00:22:04.419512       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1122 00:22:04.419686       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1122 00:22:04.423021       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1122 00:22:04.426629       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1122 00:22:04.426745       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:22:04.428920       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:22:04.428962       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:22:04.428970       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:22:04.431525       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1122 00:22:04.441394       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1122 00:22:04.441485       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1122 00:22:04.445620       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:22:04.448708       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1122 00:22:04.456991       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1122 00:22:04.465347       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:22:04.467071       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1122 00:22:04.468083       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1122 00:22:04.468386       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1122 00:22:04.469303       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1122 00:22:04.470544       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:22:04.470554       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:22:49.414946       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a69c9c4b166317f37104328a28376df7c7031e617c3f84dd468678fe6713add6] <==
	I1122 00:22:07.523431       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:22:07.581555       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:22:07.681740       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:22:07.681795       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1122 00:22:07.681945       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:22:07.708325       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:22:07.708399       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:22:07.716033       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:22:07.716632       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:22:07.716661       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:22:07.718326       1 config.go:200] "Starting service config controller"
	I1122 00:22:07.718352       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:22:07.718434       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:22:07.718450       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:22:07.718516       1 config.go:309] "Starting node config controller"
	I1122 00:22:07.718523       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:22:07.718530       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:22:07.718640       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:22:07.718651       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:22:07.818748       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1122 00:22:07.818783       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:22:07.818706       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [c3a7e2e1d4e184c40402418868bbc0ed22ca5a6fd75de2dd42a93a8caf2c8ab6] <==
	I1122 00:21:58.103200       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:21:58.105033       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:21:58.105279       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:21:58.105938       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:21:58.106206       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1122 00:21:58.108517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:21:58.108599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:21:58.108911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1122 00:21:58.109946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:21:58.109986       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:21:58.110053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1122 00:21:58.110064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:21:58.110066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:21:58.110066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 00:21:58.110179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:21:58.110240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1122 00:21:58.110285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:21:58.110369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:21:58.110370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:21:58.110377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:21:58.110374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:21:58.110441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:21:58.110459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:21:58.110523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1122 00:21:59.505990       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:22:04 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:04.428234    1449 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 22 00:22:04 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:04.429423    1449 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 22 00:22:05 default-k8s-diff-port-418191 kubelet[1449]: E1122 00:22:05.368012    1449 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:default-k8s-diff-port-418191\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'default-k8s-diff-port-418191' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 22 00:22:05 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:05.375441    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba054583-7e23-479e-a042-2c8fdf7c7b0a-lib-modules\") pod \"kube-proxy-xf4dv\" (UID: \"ba054583-7e23-479e-a042-2c8fdf7c7b0a\") " pod="kube-system/kube-proxy-xf4dv"
	Nov 22 00:22:05 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:05.375514    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7srt4\" (UniqueName: \"kubernetes.io/projected/ba054583-7e23-479e-a042-2c8fdf7c7b0a-kube-api-access-7srt4\") pod \"kube-proxy-xf4dv\" (UID: \"ba054583-7e23-479e-a042-2c8fdf7c7b0a\") " pod="kube-system/kube-proxy-xf4dv"
	Nov 22 00:22:05 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:05.375553    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ba054583-7e23-479e-a042-2c8fdf7c7b0a-kube-proxy\") pod \"kube-proxy-xf4dv\" (UID: \"ba054583-7e23-479e-a042-2c8fdf7c7b0a\") " pod="kube-system/kube-proxy-xf4dv"
	Nov 22 00:22:05 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:05.375577    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba054583-7e23-479e-a042-2c8fdf7c7b0a-xtables-lock\") pod \"kube-proxy-xf4dv\" (UID: \"ba054583-7e23-479e-a042-2c8fdf7c7b0a\") " pod="kube-system/kube-proxy-xf4dv"
	Nov 22 00:22:05 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:05.475994    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/054d63f4-c84a-4d5e-9731-d8dd34464e73-xtables-lock\") pod \"kindnet-p88n8\" (UID: \"054d63f4-c84a-4d5e-9731-d8dd34464e73\") " pod="kube-system/kindnet-p88n8"
	Nov 22 00:22:05 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:05.476052    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsn8w\" (UniqueName: \"kubernetes.io/projected/054d63f4-c84a-4d5e-9731-d8dd34464e73-kube-api-access-xsn8w\") pod \"kindnet-p88n8\" (UID: \"054d63f4-c84a-4d5e-9731-d8dd34464e73\") " pod="kube-system/kindnet-p88n8"
	Nov 22 00:22:05 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:05.476154    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/054d63f4-c84a-4d5e-9731-d8dd34464e73-cni-cfg\") pod \"kindnet-p88n8\" (UID: \"054d63f4-c84a-4d5e-9731-d8dd34464e73\") " pod="kube-system/kindnet-p88n8"
	Nov 22 00:22:05 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:05.476629    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/054d63f4-c84a-4d5e-9731-d8dd34464e73-lib-modules\") pod \"kindnet-p88n8\" (UID: \"054d63f4-c84a-4d5e-9731-d8dd34464e73\") " pod="kube-system/kindnet-p88n8"
	Nov 22 00:22:06 default-k8s-diff-port-418191 kubelet[1449]: E1122 00:22:06.486297    1449 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 22 00:22:06 default-k8s-diff-port-418191 kubelet[1449]: E1122 00:22:06.486350    1449 projected.go:196] Error preparing data for projected volume kube-api-access-7srt4 for pod kube-system/kube-proxy-xf4dv: failed to sync configmap cache: timed out waiting for the condition
	Nov 22 00:22:06 default-k8s-diff-port-418191 kubelet[1449]: E1122 00:22:06.486482    1449 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ba054583-7e23-479e-a042-2c8fdf7c7b0a-kube-api-access-7srt4 podName:ba054583-7e23-479e-a042-2c8fdf7c7b0a nodeName:}" failed. No retries permitted until 2025-11-22 00:22:06.986444996 +0000 UTC m=+7.129472238 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7srt4" (UniqueName: "kubernetes.io/projected/ba054583-7e23-479e-a042-2c8fdf7c7b0a-kube-api-access-7srt4") pod "kube-proxy-xf4dv" (UID: "ba054583-7e23-479e-a042-2c8fdf7c7b0a") : failed to sync configmap cache: timed out waiting for the condition
	Nov 22 00:22:08 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:08.021074    1449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xf4dv" podStartSLOduration=3.021053473 podStartE2EDuration="3.021053473s" podCreationTimestamp="2025-11-22 00:22:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:22:08.02041431 +0000 UTC m=+8.163441554" watchObservedRunningTime="2025-11-22 00:22:08.021053473 +0000 UTC m=+8.164080715"
	Nov 22 00:22:08 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:08.052517    1449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-p88n8" podStartSLOduration=3.05249855 podStartE2EDuration="3.05249855s" podCreationTimestamp="2025-11-22 00:22:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:22:08.051950793 +0000 UTC m=+8.194978035" watchObservedRunningTime="2025-11-22 00:22:08.05249855 +0000 UTC m=+8.195525790"
	Nov 22 00:22:47 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:47.835791    1449 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 22 00:22:47 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:47.882540    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7c87a520-6723-4298-9c0d-6bde0b15aec8-tmp\") pod \"storage-provisioner\" (UID: \"7c87a520-6723-4298-9c0d-6bde0b15aec8\") " pod="kube-system/storage-provisioner"
	Nov 22 00:22:47 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:47.882598    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wqgc\" (UniqueName: \"kubernetes.io/projected/7c87a520-6723-4298-9c0d-6bde0b15aec8-kube-api-access-8wqgc\") pod \"storage-provisioner\" (UID: \"7c87a520-6723-4298-9c0d-6bde0b15aec8\") " pod="kube-system/storage-provisioner"
	Nov 22 00:22:47 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:47.882623    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73b10676-5bd9-4c0b-8e69-ecfd1e7373a8-config-volume\") pod \"coredns-66bc5c9577-nft87\" (UID: \"73b10676-5bd9-4c0b-8e69-ecfd1e7373a8\") " pod="kube-system/coredns-66bc5c9577-nft87"
	Nov 22 00:22:47 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:47.882715    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6vmp\" (UniqueName: \"kubernetes.io/projected/73b10676-5bd9-4c0b-8e69-ecfd1e7373a8-kube-api-access-k6vmp\") pod \"coredns-66bc5c9577-nft87\" (UID: \"73b10676-5bd9-4c0b-8e69-ecfd1e7373a8\") " pod="kube-system/coredns-66bc5c9577-nft87"
	Nov 22 00:22:49 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:49.116103    1449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-nft87" podStartSLOduration=44.116081028 podStartE2EDuration="44.116081028s" podCreationTimestamp="2025-11-22 00:22:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:22:49.116046066 +0000 UTC m=+49.259073319" watchObservedRunningTime="2025-11-22 00:22:49.116081028 +0000 UTC m=+49.259108271"
	Nov 22 00:22:49 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:49.139120    1449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=44.139092491 podStartE2EDuration="44.139092491s" podCreationTimestamp="2025-11-22 00:22:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:22:49.128313366 +0000 UTC m=+49.271340606" watchObservedRunningTime="2025-11-22 00:22:49.139092491 +0000 UTC m=+49.282119736"
	Nov 22 00:22:51 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:51.102194    1449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crnjt\" (UniqueName: \"kubernetes.io/projected/62d761e0-90e8-4ae1-98f2-3a0febcc01d1-kube-api-access-crnjt\") pod \"busybox\" (UID: \"62d761e0-90e8-4ae1-98f2-3a0febcc01d1\") " pod="default/busybox"
	Nov 22 00:22:54 default-k8s-diff-port-418191 kubelet[1449]: I1122 00:22:54.129962    1449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.075017629 podStartE2EDuration="3.129910601s" podCreationTimestamp="2025-11-22 00:22:51 +0000 UTC" firstStartedPulling="2025-11-22 00:22:51.506690663 +0000 UTC m=+51.649717900" lastFinishedPulling="2025-11-22 00:22:53.561583638 +0000 UTC m=+53.704610872" observedRunningTime="2025-11-22 00:22:54.129699022 +0000 UTC m=+54.272726265" watchObservedRunningTime="2025-11-22 00:22:54.129910601 +0000 UTC m=+54.272937842"
	
	
	==> storage-provisioner [1c7a5352ca64ea1fd8d75825ba8b5b9b860a37d4a5ebd0314369cfebccf8a213] <==
	I1122 00:22:48.375217       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:22:48.384699       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:22:48.384760       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1122 00:22:48.387058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:22:48.394507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:22:48.394702       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:22:48.394866       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-418191_79caea8a-e8c0-48e5-8546-6614ce9da2e5!
	I1122 00:22:48.394959       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d3496c18-6ea3-455d-bd46-14aab2590703", APIVersion:"v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-418191_79caea8a-e8c0-48e5-8546-6614ce9da2e5 became leader
	W1122 00:22:48.397694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:22:48.402233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:22:48.495931       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-418191_79caea8a-e8c0-48e5-8546-6614ce9da2e5!
	W1122 00:22:50.405395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:22:50.409767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:22:52.413787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:22:52.418958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:22:54.422120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:22:54.426237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:22:56.429543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:22:56.435467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:22:58.444594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:22:58.452105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:23:00.458430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:23:00.466066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:23:02.471190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:23:02.481862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-418191 -n default-k8s-diff-port-418191
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-418191 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.76s)

Test pass (303/333)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 21.2
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.24
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 11.26
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.23
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
20 TestDownloadOnlyKic 0.43
21 TestBinaryMirror 0.84
22 TestOffline 56.13
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 128.93
29 TestAddons/serial/Volcano 41.14
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 9.46
35 TestAddons/parallel/Registry 14.82
36 TestAddons/parallel/RegistryCreds 0.65
37 TestAddons/parallel/Ingress 19.81
38 TestAddons/parallel/InspektorGadget 10.67
39 TestAddons/parallel/MetricsServer 5.65
41 TestAddons/parallel/CSI 35.58
42 TestAddons/parallel/Headlamp 12
43 TestAddons/parallel/CloudSpanner 5.53
44 TestAddons/parallel/LocalPath 55.64
45 TestAddons/parallel/NvidiaDevicePlugin 5.49
46 TestAddons/parallel/Yakd 10.66
47 TestAddons/parallel/AmdGpuDevicePlugin 5.5
48 TestAddons/StoppedEnableDisable 12.33
49 TestCertOptions 25.75
50 TestCertExpiration 212.91
52 TestForceSystemdFlag 39.74
53 TestForceSystemdEnv 34.56
54 TestDockerEnvContainerd 39.41
58 TestErrorSpam/setup 19.47
59 TestErrorSpam/start 0.68
60 TestErrorSpam/status 0.96
61 TestErrorSpam/pause 1.47
62 TestErrorSpam/unpause 1.5
63 TestErrorSpam/stop 2.16
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 37.46
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 5.83
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.86
75 TestFunctional/serial/CacheCmd/cache/add_local 1.91
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.52
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 39.37
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.26
86 TestFunctional/serial/LogsFileCmd 1.29
87 TestFunctional/serial/InvalidService 3.88
89 TestFunctional/parallel/ConfigCmd 0.42
90 TestFunctional/parallel/DashboardCmd 9.53
91 TestFunctional/parallel/DryRun 0.5
92 TestFunctional/parallel/InternationalLanguage 0.2
93 TestFunctional/parallel/StatusCmd 0.97
97 TestFunctional/parallel/ServiceCmdConnect 8.56
98 TestFunctional/parallel/AddonsCmd 0.19
99 TestFunctional/parallel/PersistentVolumeClaim 37.75
101 TestFunctional/parallel/SSHCmd 0.61
102 TestFunctional/parallel/CpCmd 2.12
103 TestFunctional/parallel/MySQL 24.82
104 TestFunctional/parallel/FileSync 0.31
105 TestFunctional/parallel/CertSync 1.92
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.62
113 TestFunctional/parallel/License 0.42
114 TestFunctional/parallel/Version/short 0.06
115 TestFunctional/parallel/Version/components 0.51
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
120 TestFunctional/parallel/ImageCommands/ImageBuild 4.57
121 TestFunctional/parallel/ImageCommands/Setup 1.8
122 TestFunctional/parallel/ServiceCmd/DeployApp 9.16
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
124 TestFunctional/parallel/ProfileCmd/profile_list 0.4
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.18
127 TestFunctional/parallel/MountCmd/any-port 7.9
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.06
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.83
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.45
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.6
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.36
134 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
135 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
136 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
137 TestFunctional/parallel/ServiceCmd/List 0.45
138 TestFunctional/parallel/MountCmd/specific-port 2.19
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.95
140 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
141 TestFunctional/parallel/ServiceCmd/Format 0.4
142 TestFunctional/parallel/ServiceCmd/URL 0.51
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.83
145 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.46
146 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
148 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 22.23
149 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
150 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
154 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 112.56
163 TestMultiControlPlane/serial/DeployApp 5.49
164 TestMultiControlPlane/serial/PingHostFromPods 1.22
165 TestMultiControlPlane/serial/AddWorkerNode 24.02
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.89
168 TestMultiControlPlane/serial/CopyFile 17.21
169 TestMultiControlPlane/serial/StopSecondaryNode 12.76
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.71
171 TestMultiControlPlane/serial/RestartSecondaryNode 8.82
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.92
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 96.8
174 TestMultiControlPlane/serial/DeleteSecondaryNode 9.39
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.69
176 TestMultiControlPlane/serial/StopCluster 36.19
177 TestMultiControlPlane/serial/RestartCluster 55.35
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
179 TestMultiControlPlane/serial/AddSecondaryNode 43.9
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.9
185 TestJSONOutput/start/Command 38.51
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.77
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.6
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.86
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 38.6
211 TestKicCustomNetwork/use_default_bridge_network 25.75
212 TestKicExistingNetwork 24.15
213 TestKicCustomSubnet 27.18
214 TestKicStaticIP 26.71
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 48.87
219 TestMountStart/serial/StartWithMountFirst 7.36
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 4.5
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.69
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.27
226 TestMountStart/serial/RestartStopped 7.57
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 66.42
231 TestMultiNode/serial/DeployApp2Nodes 4.48
232 TestMultiNode/serial/PingHostFrom2Pods 0.81
233 TestMultiNode/serial/AddNode 26.35
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.66
236 TestMultiNode/serial/CopyFile 9.91
237 TestMultiNode/serial/StopNode 2.28
238 TestMultiNode/serial/StartAfterStop 6.86
239 TestMultiNode/serial/RestartKeepsNodes 69.51
240 TestMultiNode/serial/DeleteNode 5.26
241 TestMultiNode/serial/StopMultiNode 24.1
242 TestMultiNode/serial/RestartMultiNode 45.81
243 TestMultiNode/serial/ValidateNameConflict 25.01
248 TestPreload 112.49
250 TestScheduledStopUnix 94.43
253 TestInsufficientStorage 12.08
254 TestRunningBinaryUpgrade 46.22
256 TestKubernetesUpgrade 337.84
257 TestMissingContainerUpgrade 137.95
259 TestPause/serial/Start 52.28
260 TestPause/serial/SecondStartNoReconfiguration 6.84
261 TestStoppedBinaryUpgrade/Setup 2.74
262 TestStoppedBinaryUpgrade/Upgrade 114.54
263 TestPause/serial/Pause 0.76
264 TestPause/serial/VerifyStatus 0.37
265 TestPause/serial/Unpause 0.67
266 TestPause/serial/PauseAgain 0.97
267 TestPause/serial/DeletePaused 2.97
268 TestPause/serial/VerifyDeletedResources 0.45
269 TestStoppedBinaryUpgrade/MinikubeLogs 1.17
278 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
279 TestNoKubernetes/serial/StartWithK8s 22.86
283 TestNoKubernetes/serial/StartWithStopK8s 6.41
288 TestNetworkPlugins/group/false 3.95
292 TestNoKubernetes/serial/Start 4.12
294 TestStartStop/group/old-k8s-version/serial/FirstStart 54.25
295 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
296 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
297 TestNoKubernetes/serial/ProfileList 38.45
299 TestStartStop/group/no-preload/serial/FirstStart 51.67
300 TestNoKubernetes/serial/Stop 1.34
301 TestNoKubernetes/serial/StartNoArgs 7
302 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
304 TestStartStop/group/embed-certs/serial/FirstStart 41.73
307 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.07
308 TestStartStop/group/old-k8s-version/serial/Stop 12.2
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.82
310 TestStartStop/group/no-preload/serial/Stop 12.21
311 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
312 TestStartStop/group/old-k8s-version/serial/SecondStart 44.15
313 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
314 TestStartStop/group/no-preload/serial/SecondStart 49.19
316 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.09
317 TestStartStop/group/embed-certs/serial/Stop 12.3
318 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
319 TestStartStop/group/embed-certs/serial/SecondStart 48.48
320 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
321 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
322 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
323 TestStartStop/group/old-k8s-version/serial/Pause 3.04
324 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
326 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 77.45
327 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
328 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.35
329 TestStartStop/group/no-preload/serial/Pause 3.7
331 TestStartStop/group/newest-cni/serial/FirstStart 30.18
332 TestNetworkPlugins/group/auto/Start 40.97
333 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.09
334 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.08
335 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
336 TestStartStop/group/embed-certs/serial/Pause 3.04
337 TestNetworkPlugins/group/kindnet/Start 44.35
338 TestStartStop/group/newest-cni/serial/DeployApp 0
339 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.98
340 TestStartStop/group/newest-cni/serial/Stop 3.54
341 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
342 TestStartStop/group/newest-cni/serial/SecondStart 11.41
343 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
346 TestStartStop/group/newest-cni/serial/Pause 3.13
347 TestNetworkPlugins/group/auto/KubeletFlags 0.35
348 TestNetworkPlugins/group/auto/NetCatPod 9.23
349 TestNetworkPlugins/group/calico/Start 56.59
350 TestNetworkPlugins/group/auto/DNS 0.14
351 TestNetworkPlugins/group/auto/Localhost 0.11
352 TestNetworkPlugins/group/auto/HairPin 0.11
354 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
355 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.95
356 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.3
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
358 TestNetworkPlugins/group/kindnet/NetCatPod 10.23
359 TestNetworkPlugins/group/custom-flannel/Start 54.9
360 TestNetworkPlugins/group/kindnet/DNS 0.22
361 TestNetworkPlugins/group/kindnet/Localhost 0.12
362 TestNetworkPlugins/group/kindnet/HairPin 0.15
363 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
364 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 50.76
365 TestNetworkPlugins/group/calico/ControllerPod 6.01
366 TestNetworkPlugins/group/enable-default-cni/Start 62.18
367 TestNetworkPlugins/group/calico/KubeletFlags 0.44
368 TestNetworkPlugins/group/calico/NetCatPod 11.42
369 TestNetworkPlugins/group/calico/DNS 0.13
370 TestNetworkPlugins/group/calico/Localhost 0.11
371 TestNetworkPlugins/group/calico/HairPin 0.13
372 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
373 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.21
374 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
375 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
376 TestNetworkPlugins/group/custom-flannel/DNS 0.14
377 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
378 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
379 TestNetworkPlugins/group/flannel/Start 46.86
380 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
381 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.59
382 TestNetworkPlugins/group/bridge/Start 43.44
383 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
384 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.26
385 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
386 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
387 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
388 TestNetworkPlugins/group/flannel/ControllerPod 6.01
389 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
390 TestNetworkPlugins/group/flannel/NetCatPod 9.2
391 TestNetworkPlugins/group/bridge/KubeletFlags 0.34
392 TestNetworkPlugins/group/bridge/NetCatPod 8.21
393 TestNetworkPlugins/group/bridge/DNS 0.13
394 TestNetworkPlugins/group/bridge/Localhost 0.11
395 TestNetworkPlugins/group/bridge/HairPin 0.12
396 TestNetworkPlugins/group/flannel/DNS 0.12
397 TestNetworkPlugins/group/flannel/Localhost 0.12
398 TestNetworkPlugins/group/flannel/HairPin 0.12
TestDownloadOnly/v1.28.0/json-events (21.2s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-820696 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-820696 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (21.199711558s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (21.20s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1121 23:46:38.325994   14530 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1121 23:46:38.326088   14530 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
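Note: the preload-exists check only confirms that the tarball fetched by the json-events step is present in the local cache. A minimal manual equivalent, assuming the same MINIKUBE_HOME layout used in this run, would be:
    ls -lh /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4   # present only once the v1.28.0 preload download has completed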

TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-820696
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-820696: exit status 85 (77.179846ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-820696 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-820696 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 23:46:17
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 23:46:17.179689   14542 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:46:17.179789   14542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:46:17.179797   14542 out.go:374] Setting ErrFile to fd 2...
	I1121 23:46:17.179801   14542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:46:17.179989   14542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9059/.minikube/bin
	W1121 23:46:17.180106   14542 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21934-9059/.minikube/config/config.json: open /home/jenkins/minikube-integration/21934-9059/.minikube/config/config.json: no such file or directory
	I1121 23:46:17.180566   14542 out.go:368] Setting JSON to true
	I1121 23:46:17.181439   14542 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":1716,"bootTime":1763767061,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 23:46:17.181491   14542 start.go:143] virtualization: kvm guest
	I1121 23:46:17.185600   14542 out.go:99] [download-only-820696] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 23:46:17.185757   14542 notify.go:221] Checking for updates...
	W1121 23:46:17.185766   14542 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball: no such file or directory
	I1121 23:46:17.187015   14542 out.go:171] MINIKUBE_LOCATION=21934
	I1121 23:46:17.188345   14542 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 23:46:17.189837   14542 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21934-9059/kubeconfig
	I1121 23:46:17.191296   14542 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9059/.minikube
	I1121 23:46:17.192545   14542 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1121 23:46:17.194877   14542 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1121 23:46:17.195122   14542 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 23:46:17.221452   14542 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 23:46:17.221563   14542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:46:17.605156   14542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-21 23:46:17.594505294 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 23:46:17.605324   14542 docker.go:319] overlay module found
	I1121 23:46:17.607219   14542 out.go:99] Using the docker driver based on user configuration
	I1121 23:46:17.607276   14542 start.go:309] selected driver: docker
	I1121 23:46:17.607289   14542 start.go:930] validating driver "docker" against <nil>
	I1121 23:46:17.607408   14542 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:46:17.670408   14542 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-21 23:46:17.660398111 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 23:46:17.670569   14542 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 23:46:17.671109   14542 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1121 23:46:17.671311   14542 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1121 23:46:17.673480   14542 out.go:171] Using Docker driver with root privileges
	I1121 23:46:17.674744   14542 cni.go:84] Creating CNI manager for ""
	I1121 23:46:17.674822   14542 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 23:46:17.674842   14542 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 23:46:17.674969   14542 start.go:353] cluster config:
	{Name:download-only-820696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-820696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 23:46:17.676348   14542 out.go:99] Starting "download-only-820696" primary control-plane node in "download-only-820696" cluster
	I1121 23:46:17.676371   14542 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1121 23:46:17.677687   14542 out.go:99] Pulling base image v0.0.48-1763588073-21934 ...
	I1121 23:46:17.677729   14542 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1121 23:46:17.677830   14542 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1121 23:46:17.695142   14542 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e to local cache
	I1121 23:46:17.695374   14542 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local cache directory
	I1121 23:46:17.695472   14542 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e to local cache
	I1121 23:46:17.774719   14542 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1121 23:46:17.774770   14542 cache.go:65] Caching tarball of preloaded images
	I1121 23:46:17.774964   14542 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1121 23:46:17.777006   14542 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1121 23:46:17.777037   14542 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1121 23:46:17.873396   14542 preload.go:295] Got checksum from GCS API "2746dfda401436a5341e0500068bf339"
	I1121 23:46:17.873529   14542 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1121 23:46:28.504305   14542 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1121 23:46:28.504692   14542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/download-only-820696/config.json ...
	I1121 23:46:28.504726   14542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/download-only-820696/config.json: {Name:mk15039ebf0d676f4f2a009e939884ae8dac6067 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:46:28.504906   14542 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1121 23:46:28.505082   14542 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21934-9059/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-820696 host does not exist
	  To start a cluster, run: "minikube start -p download-only-820696"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
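Note: the LogsDuration assertion relies on "minikube logs" exiting non-zero (status 85) for a download-only profile whose control-plane host was never created. A minimal sketch of checking that behaviour by hand, reusing the profile name from this run:
    out/minikube-linux-amd64 logs -p download-only-820696
    echo $?   # 85 expected here, since the download-only host does not exist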

TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-820696
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.1/json-events (11.26s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-642600 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-642600 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (11.258152823s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (11.26s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1121 23:46:50.047216   14530 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1121 23:46:50.047274   14530 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-642600
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-642600: exit status 85 (75.647629ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-820696 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-820696 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
	│ delete  │ -p download-only-820696                                                                                                                                                               │ download-only-820696 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │ 21 Nov 25 23:46 UTC │
	│ start   │ -o=json --download-only -p download-only-642600 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-642600 │ jenkins │ v1.37.0 │ 21 Nov 25 23:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 23:46:38
	Running on machine: ubuntu-20-agent-6
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 23:46:38.848778   14946 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:46:38.849063   14946 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:46:38.849074   14946 out.go:374] Setting ErrFile to fd 2...
	I1121 23:46:38.849081   14946 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:46:38.849311   14946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9059/.minikube/bin
	I1121 23:46:38.849792   14946 out.go:368] Setting JSON to true
	I1121 23:46:38.850595   14946 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":1738,"bootTime":1763767061,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 23:46:38.850656   14946 start.go:143] virtualization: kvm guest
	I1121 23:46:38.852863   14946 out.go:99] [download-only-642600] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 23:46:38.853027   14946 notify.go:221] Checking for updates...
	I1121 23:46:38.854620   14946 out.go:171] MINIKUBE_LOCATION=21934
	I1121 23:46:38.856441   14946 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 23:46:38.857859   14946 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21934-9059/kubeconfig
	I1121 23:46:38.859379   14946 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9059/.minikube
	I1121 23:46:38.860903   14946 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1121 23:46:38.863500   14946 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1121 23:46:38.863760   14946 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 23:46:38.888227   14946 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 23:46:38.888336   14946 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:46:38.947973   14946 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-21 23:46:38.938283004 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 23:46:38.948129   14946 docker.go:319] overlay module found
	I1121 23:46:38.949746   14946 out.go:99] Using the docker driver based on user configuration
	I1121 23:46:38.949786   14946 start.go:309] selected driver: docker
	I1121 23:46:38.949795   14946 start.go:930] validating driver "docker" against <nil>
	I1121 23:46:38.949902   14946 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:46:39.013848   14946 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-21 23:46:39.004224782 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 23:46:39.014047   14946 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 23:46:39.014607   14946 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1121 23:46:39.014756   14946 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1121 23:46:39.016985   14946 out.go:171] Using Docker driver with root privileges
	I1121 23:46:39.018177   14946 cni.go:84] Creating CNI manager for ""
	I1121 23:46:39.018244   14946 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 23:46:39.018292   14946 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 23:46:39.018381   14946 start.go:353] cluster config:
	{Name:download-only-642600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-642600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 23:46:39.019756   14946 out.go:99] Starting "download-only-642600" primary control-plane node in "download-only-642600" cluster
	I1121 23:46:39.019773   14946 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1121 23:46:39.021139   14946 out.go:99] Pulling base image v0.0.48-1763588073-21934 ...
	I1121 23:46:39.021177   14946 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 23:46:39.021271   14946 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1121 23:46:39.037895   14946 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e to local cache
	I1121 23:46:39.038081   14946 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local cache directory
	I1121 23:46:39.038101   14946 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local cache directory, skipping pull
	I1121 23:46:39.038106   14946 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in cache, skipping pull
	I1121 23:46:39.038114   14946 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e as a tarball
	I1121 23:46:39.356202   14946 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	I1121 23:46:39.356240   14946 cache.go:65] Caching tarball of preloaded images
	I1121 23:46:39.356450   14946 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 23:46:39.358459   14946 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1121 23:46:39.358487   14946 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1121 23:46:39.457607   14946 preload.go:295] Got checksum from GCS API "5d6e976daeaa84851976fc4d674fd8f4"
	I1121 23:46:39.457650   14946 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:5d6e976daeaa84851976fc4d674fd8f4 -> /home/jenkins/minikube-integration/21934-9059/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-642600 host does not exist
	  To start a cluster, run: "minikube start -p download-only-642600"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-642600
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnlyKic (0.43s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-342737 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-342737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-342737
--- PASS: TestDownloadOnlyKic (0.43s)

TestBinaryMirror (0.84s)

=== RUN   TestBinaryMirror
I1121 23:46:51.223244   14530 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-718343 --alsologtostderr --binary-mirror http://127.0.0.1:40931 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-718343" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-718343
--- PASS: TestBinaryMirror (0.84s)
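Note: TestBinaryMirror points the download at a local HTTP endpoint via --binary-mirror instead of dl.k8s.io. A rough sketch of exercising the flag outside the test harness, assuming a hypothetical local server on port 8080 and an illustrative profile name (neither is taken from this run); the mirror would need to serve files under the same path layout as dl.k8s.io (e.g. release/v1.34.1/bin/linux/amd64/kubectl):
    python3 -m http.server 8080 --directory ./mirror &   # hypothetical local mirror
    out/minikube-linux-amd64 start --download-only -p binary-mirror-demo --binary-mirror http://127.0.0.1:8080 --driver=docker --container-runtime=containerd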

TestOffline (56.13s)

=== RUN   TestOffline
=== PAUSE TestOffline


=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-796119 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-796119 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (53.303841043s)
helpers_test.go:175: Cleaning up "offline-containerd-796119" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-796119
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-796119: (2.82830009s)
--- PASS: TestOffline (56.13s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster


=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-368820
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-368820: exit status 85 (66.809657ms)

-- stdout --
	* Profile "addons-368820" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-368820"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster


=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-368820
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-368820: exit status 85 (71.175178ms)

-- stdout --
	* Profile "addons-368820" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-368820"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
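Note: both PreSetup checks run addon commands against a profile that has not been created yet and expect exit status 85 plus the "Profile ... not found" hint shown above. A quick way to confirm which profiles exist before toggling addons, as that hint itself suggests:
    out/minikube-linux-amd64 profile list
    out/minikube-linux-amd64 addons enable dashboard -p addons-368820   # exits 85 until the addons-368820 profile has been started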

TestAddons/Setup (128.93s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-368820 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-368820 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m8.932574304s)
--- PASS: TestAddons/Setup (128.93s)
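The setup command above is easier to scan when wrapped; this is the same invocation as logged, with no flags added or removed:

    out/minikube-linux-amd64 start -p addons-368820 --wait=true --memory=4096 --alsologtostderr \
      --driver=docker --container-runtime=containerd \
      --addons=registry --addons=registry-creds --addons=metrics-server \
      --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth \
      --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin \
      --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin \
      --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher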

                                                
                                    
TestAddons/serial/Volcano (41.14s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 14.293014ms
addons_test.go:876: volcano-admission stabilized in 14.339205ms
addons_test.go:868: volcano-scheduler stabilized in 14.370585ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-rr4tv" [0cb55165-8559-4b75-bec6-a3581357d0cd] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003926067s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-hg79r" [86b70bfa-c6a2-447c-9747-ffc46b83ce95] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003767574s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-nfc57" [fb320f79-e4c1-4a26-977a-a98c0819cbdf] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003206582s
addons_test.go:903: (dbg) Run:  kubectl --context addons-368820 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-368820 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-368820 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [10e12c75-a9a0-4558-9738-95675f7dbcc1] Pending
helpers_test.go:352: "test-job-nginx-0" [10e12c75-a9a0-4558-9738-95675f7dbcc1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [10e12c75-a9a0-4558-9738-95675f7dbcc1] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.002959396s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368820 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-368820 addons disable volcano --alsologtostderr -v=1: (11.776562448s)
--- PASS: TestAddons/serial/Volcano (41.14s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-368820 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-368820 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.46s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-368820 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-368820 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d67cb06e-0527-4607-90bf-09079bff1f14] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d67cb06e-0527-4607-90bf-09079bff1f14] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003563654s
addons_test.go:694: (dbg) Run:  kubectl --context addons-368820 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-368820 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-368820 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.46s)
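To spot-check the gcp-auth webhook by hand the way this test does, the essential steps (taken from the commands logged above, lightly condensed) are:

    kubectl --context addons-368820 create -f testdata/busybox.yaml
    kubectl --context addons-368820 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl --context addons-368820 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"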

                                                
                                    
TestAddons/parallel/Registry (14.82s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.414501ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-rvqbs" [cdb82e2e-26b9-42b7-90b1-7cf5c5aefa7c] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002201555s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-b54rz" [9453a33f-aed9-4ad6-9fc6-3f61fe7d1f5a] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003227453s
addons_test.go:392: (dbg) Run:  kubectl --context addons-368820 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-368820 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-368820 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.022853054s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-368820 ip
2025/11/21 23:50:14 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368820 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.82s)
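The registry reachability probe above reduces to a one-off busybox pod hitting the in-cluster Service DNS name; wrapped for readability, the logged command is:

    kubectl --context addons-368820 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"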

                                                
                                    
TestAddons/parallel/RegistryCreds (0.65s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.017837ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-368820
addons_test.go:332: (dbg) Run:  kubectl --context addons-368820 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368820 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.65s)

                                                
                                    
TestAddons/parallel/Ingress (19.81s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-368820 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-368820 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-368820 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [4daec2ea-6ec3-4e90-92d7-64a171f9fa0f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [4daec2ea-6ec3-4e90-92d7-64a171f9fa0f] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003509662s
I1121 23:50:31.442586   14530 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-368820 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-368820 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-368820 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368820 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368820 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-368820 addons disable ingress --alsologtostderr -v=1: (7.669543712s)
--- PASS: TestAddons/parallel/Ingress (19.81s)
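The ingress check is two-fold: curl the NGINX ingress from inside the node with a Host header, then resolve a test hostname against the node IP through ingress-dns. Using the values from this run:

    out/minikube-linux-amd64 -p addons-368820 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test 192.168.49.2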

                                                
                                    
TestAddons/parallel/InspektorGadget (10.67s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
I1121 23:50:00.697042   14530 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-hw5pm" [b0392791-542f-48d7-8945-b405010a82ce] Running
I1121 23:50:00.699908   14530 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1121 23:50:00.699933   14530 kapi.go:107] duration metric: took 2.91508ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003582703s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368820 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-368820 addons disable inspektor-gadget --alsologtostderr -v=1: (5.666431856s)
--- PASS: TestAddons/parallel/InspektorGadget (10.67s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.65s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 2.805592ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-9phh6" [f11dc3f7-a22d-40db-b8a2-8fc40de8da7e] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002784583s
addons_test.go:463: (dbg) Run:  kubectl --context addons-368820 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368820 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.65s)

                                                
                                    
TestAddons/parallel/CSI (35.58s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:549: csi-hostpath-driver pods stabilized in 2.925037ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-368820 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368820 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368820 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368820 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368820 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368820 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368820 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368820 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-368820 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [7f51b3e3-0372-4060-b3b0-5516ac16087a] Pending
helpers_test.go:352: "task-pv-pod" [7f51b3e3-0372-4060-b3b0-5516ac16087a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [7f51b3e3-0372-4060-b3b0-5516ac16087a] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003244524s
addons_test.go:572: (dbg) Run:  kubectl --context addons-368820 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-368820 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-368820 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-368820 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-368820 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-368820 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368820 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368820 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368820 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368820 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-368820 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [36a1b574-a5ea-485f-b0a0-491b4bd6608b] Pending
helpers_test.go:352: "task-pv-pod-restore" [36a1b574-a5ea-485f-b0a0-491b4bd6608b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [36a1b574-a5ea-485f-b0a0-491b4bd6608b] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004532104s
addons_test.go:614: (dbg) Run:  kubectl --context addons-368820 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-368820 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-368820 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368820 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368820 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-368820 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.566167491s)
--- PASS: TestAddons/parallel/CSI (35.58s)
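Condensed, the csi-hostpath-driver flow exercised above is: create a PVC, mount it in a pod, snapshot it, then restore the snapshot into a new PVC and pod, all from the bundled testdata manifests:

    kubectl --context addons-368820 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-368820 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-368820 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-368820 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-368820 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml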

                                                
                                    
TestAddons/parallel/Headlamp (12s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-368820 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-rmv94" [782cf3e6-3039-492c-9bcf-4144fccd4573] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-rmv94" [782cf3e6-3039-492c-9bcf-4144fccd4573] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003748569s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368820 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (12.00s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.53s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-2b6dz" [3ed9dccf-bc0c-4ab1-a009-7ba863fb615e] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003110953s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368820 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.53s)

                                                
                                    
TestAddons/parallel/LocalPath (55.64s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-368820 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-368820 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368820 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368820 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368820 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368820 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368820 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368820 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-368820 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [19a0244b-ecf7-4803-9c74-0d82fbee81c9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [19a0244b-ecf7-4803-9c74-0d82fbee81c9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [19a0244b-ecf7-4803-9c74-0d82fbee81c9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003363482s
addons_test.go:967: (dbg) Run:  kubectl --context addons-368820 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-368820 ssh "cat /opt/local-path-provisioner/pvc-fa820c21-119a-4883-82cf-70abe2878721_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-368820 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-368820 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368820 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-368820 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.756557253s)
--- PASS: TestAddons/parallel/LocalPath (55.64s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.49s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-7c25n" [1b5cc3bd-fa54-419a-97b6-b809fd8d3459] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003399333s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368820 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.49s)

                                                
                                    
TestAddons/parallel/Yakd (10.66s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-8j9wq" [564e7a14-61f4-4559-ab9b-639933a674b4] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004139084s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368820 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-368820 addons disable yakd --alsologtostderr -v=1: (5.651913723s)
--- PASS: TestAddons/parallel/Yakd (10.66s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.5s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-7jsl5" [a54ca5d8-d02c-4a79-af4b-a3442e19b433] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003663783s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-368820 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.50s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.33s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-368820
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-368820: (12.036795066s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-368820
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-368820
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-368820
--- PASS: TestAddons/StoppedEnableDisable (12.33s)

                                                
                                    
TestCertOptions (25.75s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-220111 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-220111 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (22.524311061s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-220111 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-220111 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-220111 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-220111" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-220111
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-220111: (2.355404214s)
--- PASS: TestCertOptions (25.75s)
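The certificate check above reads the generated apiserver cert inside the node; a manual sketch of the same inspection (the grep filter is illustrative, not part of the test) is:

    out/minikube-linux-amd64 -p cert-options-220111 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"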

                                                
                                    
TestCertExpiration (212.91s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-427330 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-427330 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (24.153080333s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-427330 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-427330 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.289649268s)
helpers_test.go:175: Cleaning up "cert-expiration-427330" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-427330
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-427330: (2.46900406s)
--- PASS: TestCertExpiration (212.91s)
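The pattern here: start with a deliberately short --cert-expiration, let it lapse (the waiting accounts for most of the 212.91s total), then start the same profile again with a one-year expiry so the second start has to handle the expired certificates. Stripped of test plumbing:

    out/minikube-linux-amd64 start -p cert-expiration-427330 --memory=3072 \
      --cert-expiration=3m --driver=docker --container-runtime=containerd
    # ...roughly three minutes later...
    out/minikube-linux-amd64 start -p cert-expiration-427330 --memory=3072 \
      --cert-expiration=8760h --driver=docker --container-runtime=containerd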

                                                
                                    
TestForceSystemdFlag (39.74s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-034864 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1122 00:15:30.018759   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/functional-383183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-034864 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (34.799578738s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-034864 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-034864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-034864
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-034864: (4.627971204s)
--- PASS: TestForceSystemdFlag (39.74s)
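Both force-systemd tests end by reading the containerd config inside the node; checking the cgroup driver by hand would look roughly like this (the SystemdCgroup filter is an assumption about what matters, not a command the test runs):

    out/minikube-linux-amd64 -p force-systemd-flag-034864 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup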

                                                
                                    
TestForceSystemdEnv (34.56s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-873830 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-873830 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (31.521433153s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-873830 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-873830" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-873830
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-873830: (2.719355321s)
--- PASS: TestForceSystemdEnv (34.56s)

                                                
                                    
TestDockerEnvContainerd (39.41s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-389604 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-389604 --driver=docker  --container-runtime=containerd: (23.284636248s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-389604"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-389604": (1.006621042s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXOyr4ke/agent.38026" SSH_AGENT_PID="38027" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXOyr4ke/agent.38026" SSH_AGENT_PID="38027" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXOyr4ke/agent.38026" SSH_AGENT_PID="38027" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.870850936s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-XXXXXXOyr4ke/agent.38026" SSH_AGENT_PID="38027" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-389604" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-389604
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-389604: (2.316610857s)
--- PASS: TestDockerEnvContainerd (39.41s)
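Outside the test harness, the docker-env output above is normally consumed with eval rather than hand-exported variables; with the profile from this run that would be approximately:

    eval "$(out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-389604)"
    docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls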

                                                
                                    
TestErrorSpam/setup (19.47s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-569549 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-569549 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-569549 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-569549 --driver=docker  --container-runtime=containerd: (19.466089701s)
--- PASS: TestErrorSpam/setup (19.47s)

                                                
                                    
TestErrorSpam/start (0.68s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-569549 --log_dir /tmp/nospam-569549 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-569549 --log_dir /tmp/nospam-569549 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-569549 --log_dir /tmp/nospam-569549 start --dry-run
--- PASS: TestErrorSpam/start (0.68s)

                                                
                                    
TestErrorSpam/status (0.96s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-569549 --log_dir /tmp/nospam-569549 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-569549 --log_dir /tmp/nospam-569549 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-569549 --log_dir /tmp/nospam-569549 status
--- PASS: TestErrorSpam/status (0.96s)

                                                
                                    
TestErrorSpam/pause (1.47s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-569549 --log_dir /tmp/nospam-569549 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-569549 --log_dir /tmp/nospam-569549 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-569549 --log_dir /tmp/nospam-569549 pause
--- PASS: TestErrorSpam/pause (1.47s)

                                                
                                    
TestErrorSpam/unpause (1.5s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-569549 --log_dir /tmp/nospam-569549 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-569549 --log_dir /tmp/nospam-569549 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-569549 --log_dir /tmp/nospam-569549 unpause
--- PASS: TestErrorSpam/unpause (1.50s)

                                                
                                    
TestErrorSpam/stop (2.16s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-569549 --log_dir /tmp/nospam-569549 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-569549 --log_dir /tmp/nospam-569549 stop: (1.947677854s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-569549 --log_dir /tmp/nospam-569549 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-569549 --log_dir /tmp/nospam-569549 stop
--- PASS: TestErrorSpam/stop (2.16s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21934-9059/.minikube/files/etc/test/nested/copy/14530/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (37.46s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-383183 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-383183 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (37.45676896s)
--- PASS: TestFunctional/serial/StartWithProxy (37.46s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (5.83s)

=== RUN   TestFunctional/serial/SoftStart
I1121 23:53:07.400467   14530 config.go:182] Loaded profile config "functional-383183": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-383183 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-383183 --alsologtostderr -v=8: (5.827485399s)
functional_test.go:678: soft start took 5.828175822s for "functional-383183" cluster.
I1121 23:53:13.228314   14530 config.go:182] Loaded profile config "functional-383183": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (5.83s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-383183 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.86s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-383183 cache add registry.k8s.io/pause:3.3: (1.079990859s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.86s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.91s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-383183 /tmp/TestFunctionalserialCacheCmdcacheadd_local1879066280/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 cache add minikube-local-cache-test:functional-383183
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-383183 cache add minikube-local-cache-test:functional-383183: (1.55331762s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 cache delete minikube-local-cache-test:functional-383183
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-383183
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.91s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.52s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383183 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (292.475746ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.52s)
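The reload sequence above is: delete the image from the node's runtime, confirm crictl no longer finds it, then repopulate it from minikube's local cache:

    out/minikube-linux-amd64 -p functional-383183 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-383183 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
    out/minikube-linux-amd64 -p functional-383183 cache reload
    out/minikube-linux-amd64 -p functional-383183 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again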

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 kubectl -- --context functional-383183 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-383183 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (39.37s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-383183 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-383183 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.367820411s)
functional_test.go:776: restart took 39.367969672s for "functional-383183" cluster.
I1121 23:53:59.804349   14530 config.go:182] Loaded profile config "functional-383183": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (39.37s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-383183 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.26s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 logs
E1121 23:54:01.070378   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/addons-368820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:54:01.076772   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/addons-368820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:54:01.088335   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/addons-368820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:54:01.110498   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/addons-368820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-383183 logs: (1.262301025s)
--- PASS: TestFunctional/serial/LogsCmd (1.26s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.29s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 logs --file /tmp/TestFunctionalserialLogsFileCmd3900841301/001/logs.txt
E1121 23:54:01.152378   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/addons-368820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:54:01.233821   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/addons-368820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:54:01.395383   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/addons-368820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:54:01.717110   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/addons-368820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:54:02.359204   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/addons-368820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-383183 logs --file /tmp/TestFunctionalserialLogsFileCmd3900841301/001/logs.txt: (1.289209134s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.29s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (3.88s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-383183 apply -f testdata/invalidsvc.yaml
E1121 23:54:03.640708   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/addons-368820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-383183
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-383183: exit status 115 (339.548475ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31169 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-383183 delete -f testdata/invalidsvc.yaml
E1121 23:54:06.202116   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/addons-368820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/serial/InvalidService (3.88s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383183 config get cpus: exit status 14 (71.180039ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383183 config get cpus: exit status 14 (69.197114ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
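The round trip above is: unset, get (expect failure), set, get, unset, get (expect failure again), with exit status 14 signalling "specified key could not be found in config". A minimal sketch of that check, driving the same CLI binary and profile shown in the log; the helper and the hard-coded exit code 14 are taken from this run's output, not from any documented contract.

```go
// Illustrative sketch: after "config unset cpus", "config get cpus" is
// expected to exit with code 14, as the log above shows.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run invokes the minikube binary under test against the functional-383183
// profile and returns the exit code plus combined output.
func run(args ...string) (int, string) {
	cmd := exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", "functional-383183"}, args...)...)
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode(), string(out)
	}
	if err != nil {
		return -1, err.Error()
	}
	return 0, string(out)
}

func main() {
	run("config", "set", "cpus", "2")
	run("config", "unset", "cpus")
	if code, out := run("config", "get", "cpus"); code != 14 {
		fmt.Printf("expected exit 14 for a missing key, got %d: %s\n", code, out)
	} else {
		fmt.Println("missing key reported with exit 14, as expected")
	}
}
```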

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (9.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-383183 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-383183 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 57527: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.53s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-383183 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-383183 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (203.34336ms)

                                                
                                                
-- stdout --
	* [functional-383183] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-9059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 23:54:15.176661   56163 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:54:15.176974   56163 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:54:15.176984   56163 out.go:374] Setting ErrFile to fd 2...
	I1121 23:54:15.176989   56163 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:54:15.177287   56163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9059/.minikube/bin
	I1121 23:54:15.177959   56163 out.go:368] Setting JSON to false
	I1121 23:54:15.179475   56163 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2194,"bootTime":1763767061,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 23:54:15.179592   56163 start.go:143] virtualization: kvm guest
	I1121 23:54:15.181678   56163 out.go:179] * [functional-383183] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1121 23:54:15.183662   56163 notify.go:221] Checking for updates...
	I1121 23:54:15.183712   56163 out.go:179]   - MINIKUBE_LOCATION=21934
	I1121 23:54:15.185519   56163 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 23:54:15.187194   56163 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9059/kubeconfig
	I1121 23:54:15.188695   56163 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9059/.minikube
	I1121 23:54:15.190200   56163 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 23:54:15.192473   56163 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 23:54:15.194092   56163 config.go:182] Loaded profile config "functional-383183": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 23:54:15.194748   56163 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 23:54:15.223168   56163 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 23:54:15.223355   56163 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:54:15.290339   56163 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-21 23:54:15.279404379 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 23:54:15.290449   56163 docker.go:319] overlay module found
	I1121 23:54:15.292329   56163 out.go:179] * Using the docker driver based on existing profile
	I1121 23:54:15.293351   56163 start.go:309] selected driver: docker
	I1121 23:54:15.293368   56163 start.go:930] validating driver "docker" against &{Name:functional-383183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-383183 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 23:54:15.293484   56163 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 23:54:15.295476   56163 out.go:203] 
	W1121 23:54:15.296964   56163 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1121 23:54:15.298441   56163 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-383183 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.50s)
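The dry run exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MB is below the usable floor quoted in the message, and the rejection happens before any node is touched. A rough sketch of that kind of pre-flight floor check follows; the 1800MB threshold and the error wording are taken from the output above, and the function is illustrative rather than minikube's actual validation code.

```go
// A rough sketch of the memory floor check this dry run trips over.
package main

import "fmt"

const minUsableMemoryMB = 1800 // floor quoted in the error message above

// validateRequestedMemory rejects a request below the floor before any
// cluster resources would be created.
func validateRequestedMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateRequestedMemory(250); err != nil {
		fmt.Println("X Exiting due to", err)
	}
}
```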

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-383183 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-383183 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (199.475479ms)

                                                
                                                
-- stdout --
	* [functional-383183] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-9059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 23:54:14.962104   56002 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:54:14.962432   56002 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:54:14.962444   56002 out.go:374] Setting ErrFile to fd 2...
	I1121 23:54:14.962451   56002 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:54:14.962883   56002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9059/.minikube/bin
	I1121 23:54:14.963486   56002 out.go:368] Setting JSON to false
	I1121 23:54:14.964796   56002 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":2194,"bootTime":1763767061,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1121 23:54:14.964882   56002 start.go:143] virtualization: kvm guest
	I1121 23:54:14.966977   56002 out.go:179] * [functional-383183] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1121 23:54:14.968316   56002 notify.go:221] Checking for updates...
	I1121 23:54:14.968341   56002 out.go:179]   - MINIKUBE_LOCATION=21934
	I1121 23:54:14.969751   56002 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 23:54:14.971117   56002 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9059/kubeconfig
	I1121 23:54:14.972619   56002 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9059/.minikube
	I1121 23:54:14.974091   56002 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1121 23:54:14.975642   56002 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 23:54:14.977451   56002 config.go:182] Loaded profile config "functional-383183": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 23:54:14.978097   56002 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 23:54:15.007728   56002 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1121 23:54:15.007835   56002 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:54:15.080655   56002 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-21 23:54:15.066085045 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 23:54:15.080822   56002 docker.go:319] overlay module found
	I1121 23:54:15.083141   56002 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1121 23:54:15.084843   56002 start.go:309] selected driver: docker
	I1121 23:54:15.084867   56002 start.go:930] validating driver "docker" against &{Name:functional-383183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-383183 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 23:54:15.085063   56002 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 23:54:15.088019   56002 out.go:203] 
	W1121 23:54:15.092173   56002 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1121 23:54:15.093462   56002 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.97s)
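The `-f` flag above is a Go text/template applied to the status object, which is why the literal label `kublet:` (typo and all) still renders fine: only the `{{.Host}}`, `{{.Kubelet}}`, `{{.APIServer}}`, and `{{.Kubeconfig}}` field references matter. A small stand-in below shows the mechanism; the `Status` struct and the "Running"/"Configured" values are illustrative placeholders, not minikube's real types.

```go
// Stand-in demonstration of how the -f status template is evaluated.
package main

import (
	"os"
	"text/template"
)

// Status mimics the fields referenced by the template in the test; it is not
// minikube's actual status type.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	t := template.Must(template.New("status").Parse(format))
	if err := t.Execute(os.Stdout, Status{
		Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured",
	}); err != nil {
		panic(err)
	}
}
```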

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-383183 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-383183 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-8bq4h" [05a762a1-c3c0-4d99-bd73-f7f41e435cf7] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-8bq4h" [05a762a1-c3c0-4d99-bd73-f7f41e435cf7] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.00418225s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30370
functional_test.go:1680: http://192.168.49.2:30370: success! body:
Request served by hello-node-connect-7d85dfc575-8bq4h

HTTP/1.1 GET /

Host: 192.168.49.2:30370
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.56s)
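The flow here is: create a deployment from kicbase/echo-server, expose it as a NodePort service, ask minikube for the service URL, and fetch it. A minimal sketch of the final verification step, assuming the URL printed for this run (a fresh run would get a different NodePort); the pass condition is simply that the echo body names the serving pod, as in the output above.

```go
// Minimal re-creation of the last step: GET the NodePort URL and confirm the
// echo-server response identifies the pod that served the request.
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	resp, err := http.Get("http://192.168.49.2:30370") // URL from this run's log
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("reading body failed:", err)
		return
	}
	if strings.Contains(string(body), "Request served by hello-node-connect") {
		fmt.Println("echo-server reachable through the NodePort")
	} else {
		fmt.Printf("unexpected body:\n%s\n", body)
	}
}
```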

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (37.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [aa7d3a11-f7c2-4cbd-a628-c349f16b875e] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003869086s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-383183 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-383183 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-383183 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-383183 apply -f testdata/storage-provisioner/pod.yaml
I1121 23:54:25.344854   14530 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [9371da7f-81ff-4935-823c-ace158aed5ca] Pending
helpers_test.go:352: "sp-pod" [9371da7f-81ff-4935-823c-ace158aed5ca] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [9371da7f-81ff-4935-823c-ace158aed5ca] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.003949264s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-383183 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-383183 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-383183 delete -f testdata/storage-provisioner/pod.yaml: (1.007766011s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-383183 apply -f testdata/storage-provisioner/pod.yaml
I1121 23:54:50.586650   14530 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [68b124c6-6f5a-406f-9854-6463897b8557] Pending
helpers_test.go:352: "sp-pod" [68b124c6-6f5a-406f-9854-6463897b8557] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [68b124c6-6f5a-406f-9854-6463897b8557] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00409355s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-383183 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (37.75s)
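The persistence check above is a four-step sequence: write a file into the PVC-backed mount, delete the pod, recreate it from the same manifest, and list the mount to confirm the file survived the pod's lifetime. The sketch below replays those exact kubectl commands against the same context and manifests named in the log; it is an illustration of the sequence, not the test's own helper code, and it omits the readiness wait the real test performs between recreate and exec.

```go
// Replays the PVC persistence sequence with the commands shown in the log.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) error {
	cmd := exec.Command("kubectl",
		append([]string{"--context", "functional-383183"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	return err
}

func main() {
	steps := [][]string{
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		// The real test waits here for the recreated pod to become Ready.
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"},
	}
	for _, s := range steps {
		if err := kubectl(s...); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}
```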

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.61s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh -n functional-383183 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 cp functional-383183:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd568582623/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh -n functional-383183 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh -n functional-383183 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.12s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (24.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-383183 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-7f9mz" [a08eba20-eae2-498e-82da-60402b8b1cce] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-7f9mz" [a08eba20-eae2-498e-82da-60402b8b1cce] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.003773817s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-383183 exec mysql-5bb876957f-7f9mz -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-383183 exec mysql-5bb876957f-7f9mz -- mysql -ppassword -e "show databases;": exit status 1 (121.374649ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1121 23:54:41.114219   14530 retry.go:31] will retry after 1.092524732s: exit status 1
E1121 23:54:42.047318   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/addons-368820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1812: (dbg) Run:  kubectl --context functional-383183 exec mysql-5bb876957f-7f9mz -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-383183 exec mysql-5bb876957f-7f9mz -- mysql -ppassword -e "show databases;": exit status 1 (105.47792ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1121 23:54:42.313549   14530 retry.go:31] will retry after 1.168654894s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-383183 exec mysql-5bb876957f-7f9mz -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.82s)
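The first two probes fail while mysqld is still coming up (access denied during initialization, then the socket not yet available), and the harness retries with roughly one-second backoffs until `show databases;` succeeds. A hedged sketch of that retry loop, using the same kubectl exec command, pod name, and password from this run; the deadline and backoff values below are illustrative choices, not the test's exact timings.

```go
// Illustrative retry loop around the mysql readiness probe seen in the log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// showDatabases runs the same probe the test ran and returns an error while
// mysqld is still unavailable.
func showDatabases() error {
	cmd := exec.Command("kubectl", "--context", "functional-383183",
		"exec", "mysql-5bb876957f-7f9mz", "--",
		"mysql", "-ppassword", "-e", "show databases;")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %s", err, out)
	}
	fmt.Printf("%s", out)
	return nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative deadline
	for {
		err := showDatabases()
		if err == nil {
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("giving up:", err)
			return
		}
		fmt.Println("will retry after 1s:", err)
		time.Sleep(time.Second)
	}
}
```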

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/14530/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh "sudo cat /etc/test/nested/copy/14530/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/14530.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh "sudo cat /etc/ssl/certs/14530.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/14530.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh "sudo cat /usr/share/ca-certificates/14530.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/145302.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh "sudo cat /etc/ssl/certs/145302.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/145302.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh "sudo cat /usr/share/ca-certificates/145302.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.92s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-383183 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383183 ssh "sudo systemctl is-active docker": exit status 1 (303.310106ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383183 ssh "sudo systemctl is-active crio": exit status 1 (311.795448ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)
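On a containerd cluster the test expects docker and crio to be reported inactive. Note how the failure surfaces: the inner `systemctl is-active` exits non-zero for an inactive unit (status 3 in the log), `minikube ssh` relays that as a non-zero exit of its own, and stdout still carries the word "inactive". The sketch below mirrors that interpretation by keying on the stdout text rather than the exit code; it is illustrative, using only the commands already shown above.

```go
// Checks that the non-active runtimes report "inactive", tolerating the
// expected non-zero exit from minikube ssh / systemctl.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func runtimeInactive(unit string) bool {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-383183",
		"ssh", fmt.Sprintf("sudo systemctl is-active %s", unit))
	out, _ := cmd.CombinedOutput() // a non-zero exit is expected here
	return strings.Contains(string(out), "inactive")
}

func main() {
	for _, unit := range []string{"docker", "crio"} {
		if runtimeInactive(unit) {
			fmt.Printf("%s is inactive, as expected on a containerd cluster\n", unit)
		} else {
			fmt.Printf("%s unexpectedly reports active\n", unit)
		}
	}
}
```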

                                                
                                    
x
+
TestFunctional/parallel/License (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-383183 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-383183
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-383183
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-383183 image ls --format short --alsologtostderr:
I1121 23:54:25.519955   61222 out.go:360] Setting OutFile to fd 1 ...
I1121 23:54:25.520088   61222 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:54:25.520099   61222 out.go:374] Setting ErrFile to fd 2...
I1121 23:54:25.520106   61222 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:54:25.520330   61222 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9059/.minikube/bin
I1121 23:54:25.520892   61222 config.go:182] Loaded profile config "functional-383183": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 23:54:25.521026   61222 config.go:182] Loaded profile config "functional-383183": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 23:54:25.521448   61222 cli_runner.go:164] Run: docker container inspect functional-383183 --format={{.State.Status}}
I1121 23:54:25.541039   61222 ssh_runner.go:195] Run: systemctl --version
I1121 23:54:25.541092   61222 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-383183
I1121 23:54:25.562048   61222 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32784 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/functional-383183/id_rsa Username:docker}
I1121 23:54:25.651976   61222 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-383183 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/minikube-local-cache-test │ functional-383183  │ sha256:e9b33d │ 991B   │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:c3994b │ 27.1MB │
│ localhost/my-image                          │ functional-383183  │ sha256:65503b │ 775kB  │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:7dd6aa │ 17.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ docker.io/kicbase/echo-server               │ functional-383183  │ sha256:9056ab │ 2.37MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:5f1f52 │ 74.3MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:c80c8d │ 22.8MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:fc2517 │ 26MB   │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-383183 image ls --format table --alsologtostderr:
I1121 23:54:30.809183   61852 out.go:360] Setting OutFile to fd 1 ...
I1121 23:54:30.809311   61852 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:54:30.809321   61852 out.go:374] Setting ErrFile to fd 2...
I1121 23:54:30.809326   61852 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:54:30.809552   61852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9059/.minikube/bin
I1121 23:54:30.810303   61852 config.go:182] Loaded profile config "functional-383183": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 23:54:30.810421   61852 config.go:182] Loaded profile config "functional-383183": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 23:54:30.811088   61852 cli_runner.go:164] Run: docker container inspect functional-383183 --format={{.State.Status}}
I1121 23:54:30.833614   61852 ssh_runner.go:195] Run: systemctl --version
I1121 23:54:30.833663   61852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-383183
I1121 23:54:30.855624   61852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32784 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/functional-383183/id_rsa Username:docker}
I1121 23:54:30.953973   61852 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-383183 image ls --format json --alsologtostderr:
[{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"25963718"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoD
igests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-383183"],"size":"2372971"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"},{"id":"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler
@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"17385568"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:65503ba77500eeab0f5ea6b62abe6f8ee5d495a13297d53a1fce6865c09ea2c8","repoDigests":[],"repoTags":["localhost/my-image:functional-383183"],"size":"774888"},{"id":"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"22820214"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:e9b33
da2c6059b6a88fe2fc2b57165061228b8d97c3d45cf6fe0780b7f4d6557","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-383183"],"size":"991"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"74311308"},{"id":"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"27061991"},{"id":"sha256:0184c1613d92931126feb4c548e
5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-383183 image ls --format json --alsologtostderr:
I1121 23:54:30.556807   61793 out.go:360] Setting OutFile to fd 1 ...
I1121 23:54:30.557097   61793 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:54:30.557107   61793 out.go:374] Setting ErrFile to fd 2...
I1121 23:54:30.557111   61793 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:54:30.557403   61793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9059/.minikube/bin
I1121 23:54:30.558089   61793 config.go:182] Loaded profile config "functional-383183": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 23:54:30.558247   61793 config.go:182] Loaded profile config "functional-383183": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 23:54:30.558758   61793 cli_runner.go:164] Run: docker container inspect functional-383183 --format={{.State.Status}}
I1121 23:54:30.580166   61793 ssh_runner.go:195] Run: systemctl --version
I1121 23:54:30.580231   61793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-383183
I1121 23:54:30.602309   61793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32784 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/functional-383183/id_rsa Username:docker}
I1121 23:54:30.697080   61793 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
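Editor's note: the stdout above is a plain JSON array of image records, so it can be post-processed directly on the host. A minimal sketch, assuming jq is installed (jq is not used by the test itself; the profile name is taken from the run above):

# list only tagged images, sorted by size in bytes
out/minikube-linux-amd64 -p functional-383183 image ls --format json \
  | jq -r '.[] | select(.repoTags | length > 0) | "\(.size)\t\(.repoTags[0])"' \
  | sort -n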

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-383183 image ls --format yaml --alsologtostderr:
- id: sha256:e9b33da2c6059b6a88fe2fc2b57165061228b8d97c3d45cf6fe0780b7f4d6557
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-383183
size: "991"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "25963718"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-383183
size: "2372971"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "17385568"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "74311308"
- id: sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "27061991"
- id: sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "22820214"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-383183 image ls --format yaml --alsologtostderr:
I1121 23:54:25.750146   61314 out.go:360] Setting OutFile to fd 1 ...
I1121 23:54:25.750330   61314 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:54:25.750344   61314 out.go:374] Setting ErrFile to fd 2...
I1121 23:54:25.750351   61314 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:54:25.750608   61314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9059/.minikube/bin
I1121 23:54:25.751314   61314 config.go:182] Loaded profile config "functional-383183": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 23:54:25.751450   61314 config.go:182] Loaded profile config "functional-383183": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 23:54:25.752088   61314 cli_runner.go:164] Run: docker container inspect functional-383183 --format={{.State.Status}}
I1121 23:54:25.773355   61314 ssh_runner.go:195] Run: systemctl --version
I1121 23:54:25.773419   61314 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-383183
I1121 23:54:25.793550   61314 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32784 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/functional-383183/id_rsa Username:docker}
I1121 23:54:25.885100   61314 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383183 ssh pgrep buildkitd: exit status 1 (267.73392ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 image build -t localhost/my-image:functional-383183 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-383183 image build -t localhost/my-image:functional-383183 testdata/build --alsologtostderr: (4.040223703s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-383183 image build -t localhost/my-image:functional-383183 testdata/build --alsologtostderr:
I1121 23:54:26.244526   61505 out.go:360] Setting OutFile to fd 1 ...
I1121 23:54:26.244807   61505 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:54:26.244818   61505 out.go:374] Setting ErrFile to fd 2...
I1121 23:54:26.244822   61505 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:54:26.245052   61505 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9059/.minikube/bin
I1121 23:54:26.245661   61505 config.go:182] Loaded profile config "functional-383183": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 23:54:26.246364   61505 config.go:182] Loaded profile config "functional-383183": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 23:54:26.246804   61505 cli_runner.go:164] Run: docker container inspect functional-383183 --format={{.State.Status}}
I1121 23:54:26.266427   61505 ssh_runner.go:195] Run: systemctl --version
I1121 23:54:26.266503   61505 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-383183
I1121 23:54:26.285478   61505 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32784 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/functional-383183/id_rsa Username:docker}
I1121 23:54:26.378052   61505 build_images.go:162] Building image from path: /tmp/build.1988338371.tar
I1121 23:54:26.378143   61505 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1121 23:54:26.386878   61505 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1988338371.tar
I1121 23:54:26.391047   61505 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1988338371.tar: stat -c "%s %y" /var/lib/minikube/build/build.1988338371.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1988338371.tar': No such file or directory
I1121 23:54:26.391081   61505 ssh_runner.go:362] scp /tmp/build.1988338371.tar --> /var/lib/minikube/build/build.1988338371.tar (3072 bytes)
I1121 23:54:26.409777   61505 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1988338371
I1121 23:54:26.418120   61505 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1988338371 -xf /var/lib/minikube/build/build.1988338371.tar
I1121 23:54:26.427469   61505 containerd.go:394] Building image: /var/lib/minikube/build/build.1988338371
I1121 23:54:26.427551   61505 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1988338371 --local dockerfile=/var/lib/minikube/build/build.1988338371 --output type=image,name=localhost/my-image:functional-383183
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 1.3s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:d028c8436cbc709d11908540406cb4117f03535b606679d09eb1f6c1f5a14eda done
#8 exporting config sha256:65503ba77500eeab0f5ea6b62abe6f8ee5d495a13297d53a1fce6865c09ea2c8 done
#8 naming to localhost/my-image:functional-383183 done
#8 DONE 0.1s
I1121 23:54:30.196333   61505 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1988338371 --local dockerfile=/var/lib/minikube/build/build.1988338371 --output type=image,name=localhost/my-image:functional-383183: (3.768743321s)
I1121 23:54:30.196423   61505 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1988338371
I1121 23:54:30.207434   61505 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1988338371.tar
I1121 23:54:30.217658   61505 build_images.go:218] Built localhost/my-image:functional-383183 from /tmp/build.1988338371.tar
I1121 23:54:30.217697   61505 build_images.go:134] succeeded building to: functional-383183
I1121 23:54:30.217707   61505 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.57s)
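Editor's note: the BuildKit steps above (a 97-byte Dockerfile, FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) imply a build context like the sketch below. The actual contents of testdata/build are not shown in the log, so this is a reconstruction; /tmp/build-sketch and the content.txt text are hypothetical:

# hypothetical build context equivalent to testdata/build
mkdir -p /tmp/build-sketch
cat > /tmp/build-sketch/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
echo "content" > /tmp/build-sketch/content.txt
# same command the test runs, pointed at the sketch directory
out/minikube-linux-amd64 -p functional-383183 image build -t localhost/my-image:functional-383183 /tmp/build-sketch --alsologtostderr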

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.781542971s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-383183
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.80s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (9.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-383183 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-383183 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-9gfgp" [d5e0ac78-9915-454a-874d-b12315a584bd] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-9gfgp" [d5e0ac78-9915-454a-874d-b12315a584bd] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003992576s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.16s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "337.708994ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "61.014877ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "343.638785ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "68.79406ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 image load --daemon kicbase/echo-server:functional-383183 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.18s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-383183 /tmp/TestFunctionalparallelMountCmdany-port3679953873/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763769248187332647" to /tmp/TestFunctionalparallelMountCmdany-port3679953873/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763769248187332647" to /tmp/TestFunctionalparallelMountCmdany-port3679953873/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763769248187332647" to /tmp/TestFunctionalparallelMountCmdany-port3679953873/001/test-1763769248187332647
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383183 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (299.696993ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1121 23:54:08.487382   14530 retry.go:31] will retry after 439.978572ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 21 23:54 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 21 23:54 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 21 23:54 test-1763769248187332647
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh cat /mount-9p/test-1763769248187332647
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-383183 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [581f2d46-997e-42ac-9025-8353782c81e5] Pending
helpers_test.go:352: "busybox-mount" [581f2d46-997e-42ac-9025-8353782c81e5] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [581f2d46-997e-42ac-9025-8353782c81e5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [581f2d46-997e-42ac-9025-8353782c81e5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003361663s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-383183 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-383183 /tmp/TestFunctionalparallelMountCmdany-port3679953873/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.90s)
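Editor's note: the test above drives "minikube mount" as a background daemon and then checks the 9p mount from inside the node. A manual equivalent of that sequence, assuming the same profile (the /tmp/host-dir path is illustrative):

# start the 9p mount in the background (host dir -> /mount-9p inside the node)
out/minikube-linux-amd64 mount -p functional-383183 /tmp/host-dir:/mount-9p --alsologtostderr -v=1 &
MOUNT_PID=$!
# verify the mount and inspect its contents from inside the node
out/minikube-linux-amd64 -p functional-383183 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-383183 ssh -- ls -la /mount-9p
# tear down the background mount process
kill "$MOUNT_PID"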

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 image load --daemon kicbase/echo-server:functional-383183 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.06s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-383183
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 image load --daemon kicbase/echo-server:functional-383183 --alsologtostderr
E1121 23:54:11.323771   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/addons-368820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.83s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 image save kicbase/echo-server:functional-383183 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 image rm kicbase/echo-server:functional-383183 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-383183
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 image save --daemon kicbase/echo-server:functional-383183 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-383183
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)
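Editor's note: the Save/Remove/Load tests above round-trip an image between the cluster's containerd store, a tarball on the host, and the host docker daemon. A condensed version of that sequence using the tags from the log (/tmp/echo-server-save.tar is an illustrative path; the run above used a workspace path):

# save the image from the cluster to a tarball, remove it, then load it back
out/minikube-linux-amd64 -p functional-383183 image save kicbase/echo-server:functional-383183 /tmp/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-383183 image rm kicbase/echo-server:functional-383183 --alsologtostderr
out/minikube-linux-amd64 -p functional-383183 image load /tmp/echo-server-save.tar --alsologtostderr
# or push it straight into the host docker daemon and confirm it arrived
out/minikube-linux-amd64 -p functional-383183 image save --daemon kicbase/echo-server:functional-383183 --alsologtostderr
docker image inspect kicbase/echo-server:functional-383183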

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-383183 /tmp/TestFunctionalparallelMountCmdspecific-port2431600924/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383183 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (366.197133ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1121 23:54:16.458703   14530 retry.go:31] will retry after 691.036199ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-383183 /tmp/TestFunctionalparallelMountCmdspecific-port2431600924/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383183 ssh "sudo umount -f /mount-9p": exit status 1 (311.785698ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-383183 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-383183 /tmp/TestFunctionalparallelMountCmdspecific-port2431600924/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.19s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 service list -o json
functional_test.go:1504: Took "950.338549ms" to run "out/minikube-linux-amd64 -p functional-383183 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.95s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31269
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31269
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.51s)
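Editor's note: the ServiceCmd tests deploy hello-node, expose it as a NodePort, and then resolve its endpoint through minikube. The same flow by hand, using the commands shown in the DeployApp and URL runs above:

kubectl --context functional-383183 create deployment hello-node --image kicbase/echo-server
kubectl --context functional-383183 expose deployment hello-node --type=NodePort --port=8080
# resolve the NodePort endpoint (printed above as http://192.168.49.2:31269)
out/minikube-linux-amd64 -p functional-383183 service hello-node --url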

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-383183 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2229101655/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-383183 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2229101655/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-383183 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2229101655/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-383183 ssh "findmnt -T" /mount1: exit status 1 (409.687153ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1121 23:54:18.695243   14530 retry.go:31] will retry after 388.246418ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-383183 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-383183 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-383183 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2229101655/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-383183 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2229101655/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-383183 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2229101655/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)
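Editor's note: VerifyCleanup starts three mounts of the same host directory and then relies on "mount --kill=true" to reap every background mount process at once. A compressed sketch of that flow (the /tmp/shared path and the loop are illustrative, not the test's code):

# three parallel 9p mounts of one host directory
for target in /mount1 /mount2 /mount3; do
  out/minikube-linux-amd64 mount -p functional-383183 /tmp/shared:"$target" --alsologtostderr -v=1 &
done
out/minikube-linux-amd64 -p functional-383183 ssh "findmnt -T" /mount1
# kill every mount process for the profile in one call
out/minikube-linux-amd64 mount -p functional-383183 --kill=true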

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-383183 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-383183 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-383183 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 60473: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-383183 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-383183 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (22.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-383183 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [3d1d64df-1ae7-4008-94f0-f4eee9457f87] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1121 23:54:21.565931   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/addons-368820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
2025/11/21 23:54:24 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:352: "nginx-svc" [3d1d64df-1ae7-4008-94f0-f4eee9457f87] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 22.003568292s
I1121 23:54:43.406202   14530 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (22.23s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-383183 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.206.131 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
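Editor's note: the tunnel tests keep "minikube tunnel" running as a daemon so the LoadBalancer service from testdata/testsvc.yaml receives an ingress IP reachable from the host (10.97.206.131 above). A rough manual equivalent; curl is an assumption and is not part of the test:

# keep a tunnel running in the background so LoadBalancer services get an ingress IP
out/minikube-linux-amd64 -p functional-383183 tunnel --alsologtostderr &
kubectl --context functional-383183 apply -f testdata/testsvc.yaml
# once the nginx-svc pod is Running, read the assigned ingress IP and probe it
IP=$(kubectl --context functional-383183 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -sI "http://$IP" | head -n1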

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-383183 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-383183
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-383183
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-383183
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (112.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1121 23:55:23.008713   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/addons-368820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:56:44.932078   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/addons-368820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-333264 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m51.838555472s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (112.56s)
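Editor's note: StartCluster brings up a multi-control-plane (HA) cluster with the flags shown above; stripped of the test harness, the two commands it wraps are simply:

# start an HA cluster on the docker driver with containerd, waiting for all components
out/minikube-linux-amd64 -p ha-333264 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker --container-runtime=containerd
# confirm every node reports a healthy status
out/minikube-linux-amd64 -p ha-333264 status --alsologtostderr -v 5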

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-333264 kubectl -- rollout status deployment/busybox: (3.314764167s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 kubectl -- exec busybox-7b57f96db7-jxq66 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 kubectl -- exec busybox-7b57f96db7-wc77h -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 kubectl -- exec busybox-7b57f96db7-z8p9k -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 kubectl -- exec busybox-7b57f96db7-jxq66 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 kubectl -- exec busybox-7b57f96db7-wc77h -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 kubectl -- exec busybox-7b57f96db7-z8p9k -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 kubectl -- exec busybox-7b57f96db7-jxq66 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 kubectl -- exec busybox-7b57f96db7-wc77h -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 kubectl -- exec busybox-7b57f96db7-z8p9k -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.49s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 kubectl -- exec busybox-7b57f96db7-jxq66 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 kubectl -- exec busybox-7b57f96db7-jxq66 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 kubectl -- exec busybox-7b57f96db7-wc77h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 kubectl -- exec busybox-7b57f96db7-wc77h -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 kubectl -- exec busybox-7b57f96db7-z8p9k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 kubectl -- exec busybox-7b57f96db7-z8p9k -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.22s)
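Editor's note: PingHostFromPods verifies that each busybox pod can resolve host.minikube.internal and ping the host-side gateway (192.168.49.1 on the docker driver). For a single pod the check reduces to the two exec calls below; the pod name is taken from this run and changes between runs:

# resolve the host-facing DNS name from inside a pod, then ping the resolved gateway
out/minikube-linux-amd64 -p ha-333264 kubectl -- exec busybox-7b57f96db7-jxq66 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
out/minikube-linux-amd64 -p ha-333264 kubectl -- exec busybox-7b57f96db7-jxq66 -- sh -c "ping -c 1 192.168.49.1"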

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (24.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-333264 node add --alsologtostderr -v 5: (23.131184484s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-333264 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (17.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 cp testdata/cp-test.txt ha-333264:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 cp ha-333264:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2153553729/001/cp-test_ha-333264.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 cp ha-333264:/home/docker/cp-test.txt ha-333264-m02:/home/docker/cp-test_ha-333264_ha-333264-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264-m02 "sudo cat /home/docker/cp-test_ha-333264_ha-333264-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 cp ha-333264:/home/docker/cp-test.txt ha-333264-m03:/home/docker/cp-test_ha-333264_ha-333264-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264-m03 "sudo cat /home/docker/cp-test_ha-333264_ha-333264-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 cp ha-333264:/home/docker/cp-test.txt ha-333264-m04:/home/docker/cp-test_ha-333264_ha-333264-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264-m04 "sudo cat /home/docker/cp-test_ha-333264_ha-333264-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 cp testdata/cp-test.txt ha-333264-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 cp ha-333264-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2153553729/001/cp-test_ha-333264-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 cp ha-333264-m02:/home/docker/cp-test.txt ha-333264:/home/docker/cp-test_ha-333264-m02_ha-333264.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264 "sudo cat /home/docker/cp-test_ha-333264-m02_ha-333264.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 cp ha-333264-m02:/home/docker/cp-test.txt ha-333264-m03:/home/docker/cp-test_ha-333264-m02_ha-333264-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264-m03 "sudo cat /home/docker/cp-test_ha-333264-m02_ha-333264-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 cp ha-333264-m02:/home/docker/cp-test.txt ha-333264-m04:/home/docker/cp-test_ha-333264-m02_ha-333264-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264-m04 "sudo cat /home/docker/cp-test_ha-333264-m02_ha-333264-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 cp testdata/cp-test.txt ha-333264-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 cp ha-333264-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2153553729/001/cp-test_ha-333264-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 cp ha-333264-m03:/home/docker/cp-test.txt ha-333264:/home/docker/cp-test_ha-333264-m03_ha-333264.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264 "sudo cat /home/docker/cp-test_ha-333264-m03_ha-333264.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 cp ha-333264-m03:/home/docker/cp-test.txt ha-333264-m02:/home/docker/cp-test_ha-333264-m03_ha-333264-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264-m02 "sudo cat /home/docker/cp-test_ha-333264-m03_ha-333264-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 cp ha-333264-m03:/home/docker/cp-test.txt ha-333264-m04:/home/docker/cp-test_ha-333264-m03_ha-333264-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264-m04 "sudo cat /home/docker/cp-test_ha-333264-m03_ha-333264-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 cp testdata/cp-test.txt ha-333264-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 cp ha-333264-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2153553729/001/cp-test_ha-333264-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 cp ha-333264-m04:/home/docker/cp-test.txt ha-333264:/home/docker/cp-test_ha-333264-m04_ha-333264.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264 "sudo cat /home/docker/cp-test_ha-333264-m04_ha-333264.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 cp ha-333264-m04:/home/docker/cp-test.txt ha-333264-m02:/home/docker/cp-test_ha-333264-m04_ha-333264-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264-m02 "sudo cat /home/docker/cp-test_ha-333264-m04_ha-333264-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 cp ha-333264-m04:/home/docker/cp-test.txt ha-333264-m03:/home/docker/cp-test_ha-333264-m04_ha-333264-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 ssh -n ha-333264-m03 "sudo cat /home/docker/cp-test_ha-333264-m04_ha-333264-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.21s)
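
The block above exercises minikube cp in every direction (host to node, node to host, node to node) and verifies each copy by catting the file over minikube ssh -n. A condensed sketch, assuming the profile and node names from this run; the /tmp path and the copied file names are arbitrary:

    # host -> node
    minikube -p ha-333264 cp testdata/cp-test.txt ha-333264:/home/docker/cp-test.txt
    # node -> host
    minikube -p ha-333264 cp ha-333264:/home/docker/cp-test.txt /tmp/cp-test_ha-333264.txt
    # node -> node
    minikube -p ha-333264 cp ha-333264:/home/docker/cp-test.txt ha-333264-m02:/home/docker/cp-test_copy.txt
    # verify on the target node
    minikube -p ha-333264 ssh -n ha-333264-m02 "sudo cat /home/docker/cp-test_copy.txt"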

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-333264 node stop m02 --alsologtostderr -v 5: (12.056702824s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-333264 status --alsologtostderr -v 5: exit status 7 (703.712236ms)

                                                
                                                
-- stdout --
	ha-333264
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-333264-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-333264-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-333264-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 23:57:54.871938   83539 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:57:54.872207   83539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:57:54.872218   83539 out.go:374] Setting ErrFile to fd 2...
	I1121 23:57:54.872222   83539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:57:54.872457   83539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9059/.minikube/bin
	I1121 23:57:54.872671   83539 out.go:368] Setting JSON to false
	I1121 23:57:54.872702   83539 mustload.go:66] Loading cluster: ha-333264
	I1121 23:57:54.872819   83539 notify.go:221] Checking for updates...
	I1121 23:57:54.873192   83539 config.go:182] Loaded profile config "ha-333264": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 23:57:54.873214   83539 status.go:174] checking status of ha-333264 ...
	I1121 23:57:54.873722   83539 cli_runner.go:164] Run: docker container inspect ha-333264 --format={{.State.Status}}
	I1121 23:57:54.894578   83539 status.go:371] ha-333264 host status = "Running" (err=<nil>)
	I1121 23:57:54.894607   83539 host.go:66] Checking if "ha-333264" exists ...
	I1121 23:57:54.894916   83539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-333264
	I1121 23:57:54.916627   83539 host.go:66] Checking if "ha-333264" exists ...
	I1121 23:57:54.916949   83539 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 23:57:54.917004   83539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-333264
	I1121 23:57:54.935680   83539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32789 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/ha-333264/id_rsa Username:docker}
	I1121 23:57:55.024802   83539 ssh_runner.go:195] Run: systemctl --version
	I1121 23:57:55.032021   83539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 23:57:55.045667   83539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:57:55.109104   83539 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-21 23:57:55.098943749 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1121 23:57:55.110043   83539 kubeconfig.go:125] found "ha-333264" server: "https://192.168.49.254:8443"
	I1121 23:57:55.110086   83539 api_server.go:166] Checking apiserver status ...
	I1121 23:57:55.110151   83539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 23:57:55.123137   83539 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1391/cgroup
	W1121 23:57:55.132145   83539 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1391/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1121 23:57:55.132212   83539 ssh_runner.go:195] Run: ls
	I1121 23:57:55.136127   83539 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1121 23:57:55.142068   83539 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1121 23:57:55.142094   83539 status.go:463] ha-333264 apiserver status = Running (err=<nil>)
	I1121 23:57:55.142103   83539 status.go:176] ha-333264 status: &{Name:ha-333264 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 23:57:55.142119   83539 status.go:174] checking status of ha-333264-m02 ...
	I1121 23:57:55.142407   83539 cli_runner.go:164] Run: docker container inspect ha-333264-m02 --format={{.State.Status}}
	I1121 23:57:55.162351   83539 status.go:371] ha-333264-m02 host status = "Stopped" (err=<nil>)
	I1121 23:57:55.162373   83539 status.go:384] host is not running, skipping remaining checks
	I1121 23:57:55.162380   83539 status.go:176] ha-333264-m02 status: &{Name:ha-333264-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 23:57:55.162402   83539 status.go:174] checking status of ha-333264-m03 ...
	I1121 23:57:55.162649   83539 cli_runner.go:164] Run: docker container inspect ha-333264-m03 --format={{.State.Status}}
	I1121 23:57:55.180987   83539 status.go:371] ha-333264-m03 host status = "Running" (err=<nil>)
	I1121 23:57:55.181017   83539 host.go:66] Checking if "ha-333264-m03" exists ...
	I1121 23:57:55.181288   83539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-333264-m03
	I1121 23:57:55.200219   83539 host.go:66] Checking if "ha-333264-m03" exists ...
	I1121 23:57:55.200523   83539 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 23:57:55.200581   83539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-333264-m03
	I1121 23:57:55.219032   83539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32799 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/ha-333264-m03/id_rsa Username:docker}
	I1121 23:57:55.307538   83539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 23:57:55.320635   83539 kubeconfig.go:125] found "ha-333264" server: "https://192.168.49.254:8443"
	I1121 23:57:55.320663   83539 api_server.go:166] Checking apiserver status ...
	I1121 23:57:55.320704   83539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 23:57:55.332620   83539 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1297/cgroup
	W1121 23:57:55.342068   83539 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1297/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1121 23:57:55.342117   83539 ssh_runner.go:195] Run: ls
	I1121 23:57:55.345807   83539 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1121 23:57:55.349822   83539 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1121 23:57:55.349846   83539 status.go:463] ha-333264-m03 apiserver status = Running (err=<nil>)
	I1121 23:57:55.349856   83539 status.go:176] ha-333264-m03 status: &{Name:ha-333264-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1121 23:57:55.349876   83539 status.go:174] checking status of ha-333264-m04 ...
	I1121 23:57:55.350152   83539 cli_runner.go:164] Run: docker container inspect ha-333264-m04 --format={{.State.Status}}
	I1121 23:57:55.368807   83539 status.go:371] ha-333264-m04 host status = "Running" (err=<nil>)
	I1121 23:57:55.368830   83539 host.go:66] Checking if "ha-333264-m04" exists ...
	I1121 23:57:55.369099   83539 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-333264-m04
	I1121 23:57:55.387741   83539 host.go:66] Checking if "ha-333264-m04" exists ...
	I1121 23:57:55.387992   83539 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 23:57:55.388035   83539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-333264-m04
	I1121 23:57:55.407597   83539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32804 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/ha-333264-m04/id_rsa Username:docker}
	I1121 23:57:55.498868   83539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 23:57:55.511820   83539 status.go:176] ha-333264-m04 status: &{Name:ha-333264-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.76s)
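
Stopping one control-plane node leaves the cluster reachable but makes minikube status exit non-zero (exit status 7 in this run), which is what the test keys on. A minimal sketch, assuming the same profile:

    minikube -p ha-333264 node stop m02 --alsologtostderr -v 5
    minikube -p ha-333264 status --alsologtostderr -v 5
    echo "status exit code: $?"   # 7 while m02 is stopped in this run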

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (8.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-333264 node start m02 --alsologtostderr -v 5: (7.886848857s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.82s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.92s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (96.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-333264 stop --alsologtostderr -v 5: (37.347444755s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 start --wait true --alsologtostderr -v 5
E1121 23:59:01.071061   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/addons-368820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:59:06.952481   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/functional-383183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:59:06.958976   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/functional-383183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:59:06.970361   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/functional-383183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:59:06.991798   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/functional-383183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:59:07.033299   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/functional-383183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:59:07.114738   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/functional-383183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:59:07.276367   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/functional-383183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:59:07.598082   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/functional-383183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:59:08.239582   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/functional-383183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:59:09.521119   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/functional-383183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:59:12.083417   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/functional-383183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:59:17.205705   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/functional-383183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:59:27.447885   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/functional-383183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:59:28.774546   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/addons-368820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-333264 start --wait true --alsologtostderr -v 5: (59.315039678s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (96.80s)
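
The restart test records the node list, stops every machine, restarts the whole profile with --wait true, and checks that the node list is unchanged. A minimal sketch of that sequence, assuming the same profile name:

    minikube -p ha-333264 node list --alsologtostderr -v 5    # record the node set
    minikube -p ha-333264 stop --alsologtostderr -v 5         # stops all nodes (about 37s in this run)
    minikube -p ha-333264 start --wait true --alsologtostderr -v 5
    minikube -p ha-333264 node list --alsologtostderr -v 5    # should match the first listing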

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (9.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 node delete m03 --alsologtostderr -v 5
E1121 23:59:47.929455   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/functional-383183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-333264 node delete m03 --alsologtostderr -v 5: (8.581373229s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.39s)
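
Deleting a secondary control plane and confirming the survivors stay Ready can be reproduced with the same commands the test runs; the go-template below is the readiness check from the log. A sketch, assuming the profile and node name from this run:

    minikube -p ha-333264 node delete m03 --alsologtostderr -v 5
    minikube -p ha-333264 status --alsologtostderr -v 5
    # every remaining node should print " True"
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'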

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (36.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 stop --alsologtostderr -v 5
E1122 00:00:28.891398   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/functional-383183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-333264 stop --alsologtostderr -v 5: (36.064412753s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-333264 status --alsologtostderr -v 5: exit status 7 (122.953587ms)

                                                
                                                
-- stdout --
	ha-333264
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-333264-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-333264-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:00:28.975081   99857 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:00:28.975394   99857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:00:28.975406   99857 out.go:374] Setting ErrFile to fd 2...
	I1122 00:00:28.975410   99857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:00:28.975580   99857 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9059/.minikube/bin
	I1122 00:00:28.975762   99857 out.go:368] Setting JSON to false
	I1122 00:00:28.975791   99857 mustload.go:66] Loading cluster: ha-333264
	I1122 00:00:28.975903   99857 notify.go:221] Checking for updates...
	I1122 00:00:28.976141   99857 config.go:182] Loaded profile config "ha-333264": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:00:28.976155   99857 status.go:174] checking status of ha-333264 ...
	I1122 00:00:28.976638   99857 cli_runner.go:164] Run: docker container inspect ha-333264 --format={{.State.Status}}
	I1122 00:00:28.997001   99857 status.go:371] ha-333264 host status = "Stopped" (err=<nil>)
	I1122 00:00:28.997049   99857 status.go:384] host is not running, skipping remaining checks
	I1122 00:00:28.997067   99857 status.go:176] ha-333264 status: &{Name:ha-333264 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:00:28.997120   99857 status.go:174] checking status of ha-333264-m02 ...
	I1122 00:00:28.997530   99857 cli_runner.go:164] Run: docker container inspect ha-333264-m02 --format={{.State.Status}}
	I1122 00:00:29.018009   99857 status.go:371] ha-333264-m02 host status = "Stopped" (err=<nil>)
	I1122 00:00:29.018035   99857 status.go:384] host is not running, skipping remaining checks
	I1122 00:00:29.018044   99857 status.go:176] ha-333264-m02 status: &{Name:ha-333264-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:00:29.018078   99857 status.go:174] checking status of ha-333264-m04 ...
	I1122 00:00:29.018360   99857 cli_runner.go:164] Run: docker container inspect ha-333264-m04 --format={{.State.Status}}
	I1122 00:00:29.036726   99857 status.go:371] ha-333264-m04 host status = "Stopped" (err=<nil>)
	I1122 00:00:29.036751   99857 status.go:384] host is not running, skipping remaining checks
	I1122 00:00:29.036759   99857 status.go:176] ha-333264-m04 status: &{Name:ha-333264-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.19s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (55.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-333264 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (54.542628248s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (55.35s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (43.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 node add --control-plane --alsologtostderr -v 5
E1122 00:01:50.814878   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/functional-383183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-333264 node add --control-plane --alsologtostderr -v 5: (42.989513739s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-333264 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (43.90s)
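
Adding a node with --control-plane brings the cluster back up to three control planes after the earlier delete. A minimal sketch, assuming the same profile:

    minikube -p ha-333264 node add --control-plane --alsologtostderr -v 5
    minikube -p ha-333264 status --alsologtostderr -v 5   # the new node should report type: Control Plane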

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.90s)

                                                
                                    
x
+
TestJSONOutput/start/Command (38.51s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-659746 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-659746 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (38.514011871s)
--- PASS: TestJSONOutput/start/Command (38.51s)
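
With --output=json, minikube start emits one CloudEvents-style JSON object per line (the schema is visible in the TestErrorJSONOutput block further down). A sketch of consuming that stream, assuming jq is installed; jq is not part of this run:

    minikube start -p json-output-659746 --output=json --user=testUser --memory=3072 --wait=true \
      --driver=docker --container-runtime=containerd \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'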

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.77s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-659746 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-659746 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.86s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-659746 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-659746 --output=json --user=testUser: (5.86036461s)
--- PASS: TestJSONOutput/stop/Command (5.86s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-398299 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-398299 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (82.372723ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"320db0df-fb02-4be4-aba3-57d1cb3fc01e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-398299] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bc2bfa54-5f8e-4ad2-b303-cf1a4f86d695","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21934"}}
	{"specversion":"1.0","id":"f10ba017-6683-48fb-8413-e352f5734de3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3eab1f09-7a48-4c13-adcb-dc458b2b2f11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21934-9059/kubeconfig"}}
	{"specversion":"1.0","id":"8de33580-b53b-425f-9d00-ba431e365a61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9059/.minikube"}}
	{"specversion":"1.0","id":"d44b7773-7acc-4200-a7f2-6423ceed1281","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d1b0dd4c-6e49-4dd1-97a1-96af32d0dd14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"661fc5bb-58a4-42e7-a4ac-eba2f0b07df3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-398299" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-398299
--- PASS: TestErrorJSONOutput (0.24s)
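
An unsupported driver makes minikube start fail fast with a structured error event (DRV_UNSUPPORTED_OS) and a non-zero exit code, 56 in the JSON above. A minimal reproduction, reusing the profile name from this run:

    minikube start -p json-output-error-398299 --memory=3072 --output=json --wait=true --driver=fail
    echo "exit code: $?"   # 56 in this run
    minikube delete -p json-output-error-398299   # clean up the leftover profile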

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (38.6s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-248710 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-248710 --network=: (36.38907412s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-248710" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-248710
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-248710: (2.185346595s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.60s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (25.75s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-690807 --network=bridge
E1122 00:04:01.071206   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/addons-368820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:04:06.953077   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/functional-383183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-690807 --network=bridge: (23.688579935s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-690807" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-690807
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-690807: (2.042550932s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.75s)
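
Passing --network=bridge puts the node container on Docker's built-in bridge network, so the cluster reuses that network rather than getting a dedicated one. A minimal sketch, assuming Docker and a minikube binary on PATH:

    minikube start -p docker-network-690807 --network=bridge
    docker network ls --format '{{.Name}}'    # the listing the test inspects
    minikube delete -p docker-network-690807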

                                                
                                    
x
+
TestKicExistingNetwork (24.15s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1122 00:04:12.234975   14530 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1122 00:04:12.252827   14530 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1122 00:04:12.252894   14530 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1122 00:04:12.252921   14530 cli_runner.go:164] Run: docker network inspect existing-network
W1122 00:04:12.271413   14530 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1122 00:04:12.271447   14530 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1122 00:04:12.271477   14530 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1122 00:04:12.271636   14530 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1122 00:04:12.291401   14530 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1df6c22ede91 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:5a:c7:f4:a5:24:54} reservation:<nil>}
I1122 00:04:12.291814   14530 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003f83e0}
I1122 00:04:12.291846   14530 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1122 00:04:12.291898   14530 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1122 00:04:12.341236   14530 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-274407 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-274407 --network=existing-network: (21.967463375s)
helpers_test.go:175: Cleaning up "existing-network-274407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-274407
E1122 00:04:34.657102   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/functional-383183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-274407: (2.039768655s)
I1122 00:04:36.367190   14530 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.15s)
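
The existing-network test pre-creates a bridge network on a free subnet (192.168.58.0/24 above, since 192.168.49.0/24 was already taken) and then points minikube at it by name. A condensed sketch of the same flow, with the minikube-internal -o options and labels from the log omitted for brevity:

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
      -o com.docker.network.driver.mtu=1500 existing-network
    minikube start -p existing-network-274407 --network=existing-network
    minikube delete -p existing-network-274407
    docker network rm existing-network   # remove the pre-created network yourself when done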

                                                
                                    
x
+
TestKicCustomSubnet (27.18s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-391332 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-391332 --subnet=192.168.60.0/24: (25.013817782s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-391332 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-391332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-391332
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-391332: (2.150140091s)
--- PASS: TestKicCustomSubnet (27.18s)
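
--subnet pins the profile's Docker network to a specific CIDR, which can then be read back from the network's IPAM config. A minimal sketch using the values from this run:

    minikube start -p custom-subnet-391332 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-391332 --format '{{(index .IPAM.Config 0).Subnet}}'   # 192.168.60.0/24
    minikube delete -p custom-subnet-391332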

                                                
                                    
x
+
TestKicStaticIP (26.71s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-936519 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-936519 --static-ip=192.168.200.200: (24.394407485s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-936519 ip
helpers_test.go:175: Cleaning up "static-ip-936519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-936519
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-936519: (2.160010788s)
--- PASS: TestKicStaticIP (26.71s)
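
--static-ip gives the node container a fixed address, which minikube ip should echo back. A minimal sketch using the values from this run:

    minikube start -p static-ip-936519 --static-ip=192.168.200.200
    minikube -p static-ip-936519 ip   # expected: 192.168.200.200
    minikube delete -p static-ip-936519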

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (48.87s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-230697 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-230697 --driver=docker  --container-runtime=containerd: (22.986480017s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-232998 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-232998 --driver=docker  --container-runtime=containerd: (20.315657127s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-230697
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-232998
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-232998" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-232998
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-232998: (1.944930284s)
helpers_test.go:175: Cleaning up "first-230697" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-230697
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-230697: (2.3897395s)
--- PASS: TestMinikubeProfile (48.87s)

TestMountStart/serial/StartWithMountFirst (7.36s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-760211 --memory=3072 --mount-string /tmp/TestMountStartserial740353802/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-760211 --memory=3072 --mount-string /tmp/TestMountStartserial740353802/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.357648382s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.36s)

TestMountStart/serial/VerifyMountFirst (0.28s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-760211 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (4.5s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-785450 --memory=3072 --mount-string /tmp/TestMountStartserial740353802/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-785450 --memory=3072 --mount-string /tmp/TestMountStartserial740353802/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.495990603s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.50s)

TestMountStart/serial/VerifyMountSecond (0.28s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-785450 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.69s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-760211 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-760211 --alsologtostderr -v=5: (1.68535052s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-785450 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.27s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-785450
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-785450: (1.26891985s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (7.57s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-785450
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-785450: (6.56805601s)
--- PASS: TestMountStart/serial/RestartStopped (7.57s)

TestMountStart/serial/VerifyMountPostStop (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-785450 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (66.42s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-716494 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-716494 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m5.920400842s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (66.42s)

TestMultiNode/serial/DeployApp2Nodes (4.48s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716494 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716494 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-716494 -- rollout status deployment/busybox: (2.949790741s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716494 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716494 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716494 -- exec busybox-7b57f96db7-hc2fr -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716494 -- exec busybox-7b57f96db7-nrgx2 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716494 -- exec busybox-7b57f96db7-hc2fr -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716494 -- exec busybox-7b57f96db7-nrgx2 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716494 -- exec busybox-7b57f96db7-hc2fr -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716494 -- exec busybox-7b57f96db7-nrgx2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.48s)

TestMultiNode/serial/PingHostFrom2Pods (0.81s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716494 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716494 -- exec busybox-7b57f96db7-hc2fr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716494 -- exec busybox-7b57f96db7-hc2fr -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716494 -- exec busybox-7b57f96db7-nrgx2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-716494 -- exec busybox-7b57f96db7-nrgx2 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)

TestMultiNode/serial/AddNode (26.35s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-716494 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-716494 -v=5 --alsologtostderr: (25.710504045s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (26.35s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-716494 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.66s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

TestMultiNode/serial/CopyFile (9.91s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 cp testdata/cp-test.txt multinode-716494:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 ssh -n multinode-716494 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 cp multinode-716494:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile399032965/001/cp-test_multinode-716494.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 ssh -n multinode-716494 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 cp multinode-716494:/home/docker/cp-test.txt multinode-716494-m02:/home/docker/cp-test_multinode-716494_multinode-716494-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 ssh -n multinode-716494 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 ssh -n multinode-716494-m02 "sudo cat /home/docker/cp-test_multinode-716494_multinode-716494-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 cp multinode-716494:/home/docker/cp-test.txt multinode-716494-m03:/home/docker/cp-test_multinode-716494_multinode-716494-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 ssh -n multinode-716494 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 ssh -n multinode-716494-m03 "sudo cat /home/docker/cp-test_multinode-716494_multinode-716494-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 cp testdata/cp-test.txt multinode-716494-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 ssh -n multinode-716494-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 cp multinode-716494-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile399032965/001/cp-test_multinode-716494-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 ssh -n multinode-716494-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 cp multinode-716494-m02:/home/docker/cp-test.txt multinode-716494:/home/docker/cp-test_multinode-716494-m02_multinode-716494.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 ssh -n multinode-716494-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 ssh -n multinode-716494 "sudo cat /home/docker/cp-test_multinode-716494-m02_multinode-716494.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 cp multinode-716494-m02:/home/docker/cp-test.txt multinode-716494-m03:/home/docker/cp-test_multinode-716494-m02_multinode-716494-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 ssh -n multinode-716494-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 ssh -n multinode-716494-m03 "sudo cat /home/docker/cp-test_multinode-716494-m02_multinode-716494-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 cp testdata/cp-test.txt multinode-716494-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 ssh -n multinode-716494-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 cp multinode-716494-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile399032965/001/cp-test_multinode-716494-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 ssh -n multinode-716494-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 cp multinode-716494-m03:/home/docker/cp-test.txt multinode-716494:/home/docker/cp-test_multinode-716494-m03_multinode-716494.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 ssh -n multinode-716494-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 ssh -n multinode-716494 "sudo cat /home/docker/cp-test_multinode-716494-m03_multinode-716494.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 cp multinode-716494-m03:/home/docker/cp-test.txt multinode-716494-m02:/home/docker/cp-test_multinode-716494-m03_multinode-716494-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 ssh -n multinode-716494-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 ssh -n multinode-716494-m02 "sudo cat /home/docker/cp-test_multinode-716494-m03_multinode-716494-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.91s)

TestMultiNode/serial/StopNode (2.28s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-716494 node stop m03: (1.276055052s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-716494 status: exit status 7 (496.485266ms)

                                                
                                                
-- stdout --
	multinode-716494
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-716494-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-716494-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-716494 status --alsologtostderr: exit status 7 (501.864524ms)

                                                
                                                
-- stdout --
	multinode-716494
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-716494-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-716494-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:08:35.073063  161895 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:08:35.073359  161895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:08:35.073369  161895 out.go:374] Setting ErrFile to fd 2...
	I1122 00:08:35.073374  161895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:08:35.073575  161895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9059/.minikube/bin
	I1122 00:08:35.073725  161895 out.go:368] Setting JSON to false
	I1122 00:08:35.073748  161895 mustload.go:66] Loading cluster: multinode-716494
	I1122 00:08:35.073783  161895 notify.go:221] Checking for updates...
	I1122 00:08:35.074137  161895 config.go:182] Loaded profile config "multinode-716494": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:08:35.074152  161895 status.go:174] checking status of multinode-716494 ...
	I1122 00:08:35.074568  161895 cli_runner.go:164] Run: docker container inspect multinode-716494 --format={{.State.Status}}
	I1122 00:08:35.094990  161895 status.go:371] multinode-716494 host status = "Running" (err=<nil>)
	I1122 00:08:35.095014  161895 host.go:66] Checking if "multinode-716494" exists ...
	I1122 00:08:35.095342  161895 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-716494
	I1122 00:08:35.114636  161895 host.go:66] Checking if "multinode-716494" exists ...
	I1122 00:08:35.114950  161895 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:08:35.114994  161895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-716494
	I1122 00:08:35.135411  161895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/multinode-716494/id_rsa Username:docker}
	I1122 00:08:35.227435  161895 ssh_runner.go:195] Run: systemctl --version
	I1122 00:08:35.234078  161895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:08:35.247435  161895 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:08:35.305375  161895 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-22 00:08:35.294145001 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:08:35.305920  161895 kubeconfig.go:125] found "multinode-716494" server: "https://192.168.67.2:8443"
	I1122 00:08:35.305956  161895 api_server.go:166] Checking apiserver status ...
	I1122 00:08:35.306010  161895 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:08:35.318880  161895 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1297/cgroup
	W1122 00:08:35.327452  161895 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1297/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:08:35.327509  161895 ssh_runner.go:195] Run: ls
	I1122 00:08:35.331388  161895 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1122 00:08:35.336431  161895 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1122 00:08:35.336457  161895 status.go:463] multinode-716494 apiserver status = Running (err=<nil>)
	I1122 00:08:35.336480  161895 status.go:176] multinode-716494 status: &{Name:multinode-716494 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:08:35.336497  161895 status.go:174] checking status of multinode-716494-m02 ...
	I1122 00:08:35.336729  161895 cli_runner.go:164] Run: docker container inspect multinode-716494-m02 --format={{.State.Status}}
	I1122 00:08:35.356189  161895 status.go:371] multinode-716494-m02 host status = "Running" (err=<nil>)
	I1122 00:08:35.356218  161895 host.go:66] Checking if "multinode-716494-m02" exists ...
	I1122 00:08:35.356506  161895 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-716494-m02
	I1122 00:08:35.375773  161895 host.go:66] Checking if "multinode-716494-m02" exists ...
	I1122 00:08:35.376100  161895 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:08:35.376153  161895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-716494-m02
	I1122 00:08:35.394018  161895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/21934-9059/.minikube/machines/multinode-716494-m02/id_rsa Username:docker}
	I1122 00:08:35.482520  161895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:08:35.495105  161895 status.go:176] multinode-716494-m02 status: &{Name:multinode-716494-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:08:35.495153  161895 status.go:174] checking status of multinode-716494-m03 ...
	I1122 00:08:35.495440  161895 cli_runner.go:164] Run: docker container inspect multinode-716494-m03 --format={{.State.Status}}
	I1122 00:08:35.513593  161895 status.go:371] multinode-716494-m03 host status = "Stopped" (err=<nil>)
	I1122 00:08:35.513616  161895 status.go:384] host is not running, skipping remaining checks
	I1122 00:08:35.513622  161895 status.go:176] multinode-716494-m03 status: &{Name:multinode-716494-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)

TestMultiNode/serial/StartAfterStop (6.86s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-716494 node start m03 -v=5 --alsologtostderr: (6.165107496s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (6.86s)

TestMultiNode/serial/RestartKeepsNodes (69.51s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-716494
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-716494
E1122 00:09:01.072617   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/addons-368820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:09:06.954417   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/functional-383183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-716494: (25.034074674s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-716494 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-716494 --wait=true -v=5 --alsologtostderr: (44.349062078s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-716494
--- PASS: TestMultiNode/serial/RestartKeepsNodes (69.51s)

TestMultiNode/serial/DeleteNode (5.26s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-716494 node delete m03: (4.654336249s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.26s)

TestMultiNode/serial/StopMultiNode (24.1s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-716494 stop: (23.895313064s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-716494 status: exit status 7 (106.428993ms)

                                                
                                                
-- stdout --
	multinode-716494
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-716494-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-716494 status --alsologtostderr: exit status 7 (102.327594ms)

                                                
                                                
-- stdout --
	multinode-716494
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-716494-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:10:21.204052  171574 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:10:21.204352  171574 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:10:21.204363  171574 out.go:374] Setting ErrFile to fd 2...
	I1122 00:10:21.204367  171574 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:10:21.204578  171574 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9059/.minikube/bin
	I1122 00:10:21.204743  171574 out.go:368] Setting JSON to false
	I1122 00:10:21.204771  171574 mustload.go:66] Loading cluster: multinode-716494
	I1122 00:10:21.204901  171574 notify.go:221] Checking for updates...
	I1122 00:10:21.205113  171574 config.go:182] Loaded profile config "multinode-716494": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:10:21.205127  171574 status.go:174] checking status of multinode-716494 ...
	I1122 00:10:21.205586  171574 cli_runner.go:164] Run: docker container inspect multinode-716494 --format={{.State.Status}}
	I1122 00:10:21.226341  171574 status.go:371] multinode-716494 host status = "Stopped" (err=<nil>)
	I1122 00:10:21.226385  171574 status.go:384] host is not running, skipping remaining checks
	I1122 00:10:21.226394  171574 status.go:176] multinode-716494 status: &{Name:multinode-716494 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:10:21.226422  171574 status.go:174] checking status of multinode-716494-m02 ...
	I1122 00:10:21.226669  171574 cli_runner.go:164] Run: docker container inspect multinode-716494-m02 --format={{.State.Status}}
	I1122 00:10:21.245106  171574 status.go:371] multinode-716494-m02 host status = "Stopped" (err=<nil>)
	I1122 00:10:21.245128  171574 status.go:384] host is not running, skipping remaining checks
	I1122 00:10:21.245135  171574 status.go:176] multinode-716494-m02 status: &{Name:multinode-716494-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.10s)

TestMultiNode/serial/RestartMultiNode (45.81s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-716494 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1122 00:10:24.138077   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/addons-368820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-716494 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (45.199229587s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-716494 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (45.81s)

TestMultiNode/serial/ValidateNameConflict (25.01s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-716494
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-716494-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-716494-m02 --driver=docker  --container-runtime=containerd: exit status 14 (84.102769ms)

                                                
                                                
-- stdout --
	* [multinode-716494-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-9059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-716494-m02' is duplicated with machine name 'multinode-716494-m02' in profile 'multinode-716494'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-716494-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-716494-m03 --driver=docker  --container-runtime=containerd: (22.183267728s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-716494
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-716494: exit status 80 (294.676153ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-716494 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-716494-m03 already exists in multinode-716494-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-716494-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-716494-m03: (2.38877106s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.01s)

TestPreload (112.49s)
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-878214 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-878214 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (49.452365789s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-878214 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-878214 image pull gcr.io/k8s-minikube/busybox: (2.857400549s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-878214
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-878214: (5.728567755s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-878214 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-878214 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (51.735426285s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-878214 image list
helpers_test.go:175: Cleaning up "test-preload-878214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-878214
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-878214: (2.48311141s)
--- PASS: TestPreload (112.49s)

TestScheduledStopUnix (94.43s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-927283 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-927283 --memory=3072 --driver=docker  --container-runtime=containerd: (18.715253967s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-927283 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1122 00:13:47.551112  189709 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:13:47.551456  189709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:13:47.551468  189709 out.go:374] Setting ErrFile to fd 2...
	I1122 00:13:47.551472  189709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:13:47.551650  189709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9059/.minikube/bin
	I1122 00:13:47.551897  189709 out.go:368] Setting JSON to false
	I1122 00:13:47.551999  189709 mustload.go:66] Loading cluster: scheduled-stop-927283
	I1122 00:13:47.552323  189709 config.go:182] Loaded profile config "scheduled-stop-927283": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:13:47.552400  189709 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/scheduled-stop-927283/config.json ...
	I1122 00:13:47.552585  189709 mustload.go:66] Loading cluster: scheduled-stop-927283
	I1122 00:13:47.552678  189709 config.go:182] Loaded profile config "scheduled-stop-927283": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-927283 -n scheduled-stop-927283
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-927283 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1122 00:13:47.943057  189877 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:13:47.943346  189877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:13:47.943358  189877 out.go:374] Setting ErrFile to fd 2...
	I1122 00:13:47.943363  189877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:13:47.943614  189877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9059/.minikube/bin
	I1122 00:13:47.943913  189877 out.go:368] Setting JSON to false
	I1122 00:13:47.944133  189877 daemonize_unix.go:73] killing process 189759 as it is an old scheduled stop
	I1122 00:13:47.944251  189877 mustload.go:66] Loading cluster: scheduled-stop-927283
	I1122 00:13:47.944637  189877 config.go:182] Loaded profile config "scheduled-stop-927283": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:13:47.944717  189877 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/scheduled-stop-927283/config.json ...
	I1122 00:13:47.944910  189877 mustload.go:66] Loading cluster: scheduled-stop-927283
	I1122 00:13:47.945042  189877 config.go:182] Loaded profile config "scheduled-stop-927283": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1122 00:13:47.949749   14530 retry.go:31] will retry after 98.373µs: open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/scheduled-stop-927283/pid: no such file or directory
I1122 00:13:47.950929   14530 retry.go:31] will retry after 143.562µs: open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/scheduled-stop-927283/pid: no such file or directory
I1122 00:13:47.952086   14530 retry.go:31] will retry after 116.655µs: open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/scheduled-stop-927283/pid: no such file or directory
I1122 00:13:47.953244   14530 retry.go:31] will retry after 299.691µs: open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/scheduled-stop-927283/pid: no such file or directory
I1122 00:13:47.954397   14530 retry.go:31] will retry after 573.193µs: open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/scheduled-stop-927283/pid: no such file or directory
I1122 00:13:47.955529   14530 retry.go:31] will retry after 996.031µs: open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/scheduled-stop-927283/pid: no such file or directory
I1122 00:13:47.956673   14530 retry.go:31] will retry after 739.692µs: open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/scheduled-stop-927283/pid: no such file or directory
I1122 00:13:47.957838   14530 retry.go:31] will retry after 1.284945ms: open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/scheduled-stop-927283/pid: no such file or directory
I1122 00:13:47.960063   14530 retry.go:31] will retry after 3.574832ms: open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/scheduled-stop-927283/pid: no such file or directory
I1122 00:13:47.964313   14530 retry.go:31] will retry after 3.407913ms: open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/scheduled-stop-927283/pid: no such file or directory
I1122 00:13:47.968552   14530 retry.go:31] will retry after 6.304578ms: open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/scheduled-stop-927283/pid: no such file or directory
I1122 00:13:47.975921   14530 retry.go:31] will retry after 8.929264ms: open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/scheduled-stop-927283/pid: no such file or directory
I1122 00:13:47.985244   14530 retry.go:31] will retry after 10.947737ms: open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/scheduled-stop-927283/pid: no such file or directory
I1122 00:13:47.996574   14530 retry.go:31] will retry after 21.230067ms: open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/scheduled-stop-927283/pid: no such file or directory
I1122 00:13:48.018854   14530 retry.go:31] will retry after 36.710168ms: open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/scheduled-stop-927283/pid: no such file or directory
I1122 00:13:48.056182   14530 retry.go:31] will retry after 32.86478ms: open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/scheduled-stop-927283/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-927283 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1122 00:14:01.072207   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/addons-368820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:14:06.957519   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/functional-383183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-927283 -n scheduled-stop-927283
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-927283
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-927283 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1122 00:14:13.848433  190752 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:14:13.848558  190752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:14:13.848565  190752 out.go:374] Setting ErrFile to fd 2...
	I1122 00:14:13.848577  190752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:14:13.848796  190752 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9059/.minikube/bin
	I1122 00:14:13.849054  190752 out.go:368] Setting JSON to false
	I1122 00:14:13.849151  190752 mustload.go:66] Loading cluster: scheduled-stop-927283
	I1122 00:14:13.849530  190752 config.go:182] Loaded profile config "scheduled-stop-927283": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:14:13.849614  190752 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/scheduled-stop-927283/config.json ...
	I1122 00:14:13.849827  190752 mustload.go:66] Loading cluster: scheduled-stop-927283
	I1122 00:14:13.849961  190752 config.go:182] Loaded profile config "scheduled-stop-927283": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-927283
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-927283: exit status 7 (82.595784ms)

                                                
                                                
-- stdout --
	scheduled-stop-927283
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-927283 -n scheduled-stop-927283
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-927283 -n scheduled-stop-927283: exit status 7 (80.181259ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-927283" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-927283
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-927283: (4.180437716s)
--- PASS: TestScheduledStopUnix (94.43s)

TestInsufficientStorage (12.08s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-525275 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-525275 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (9.535882109s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b56f4af1-0ca6-4e60-a95a-edc75c2dbae3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-525275] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"300eaec6-15b9-484f-9327-5aef6cf336b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21934"}}
	{"specversion":"1.0","id":"81f88ebd-666d-425d-bdef-6497e5669207","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ea5bb312-607b-4cb6-a634-d050c48752d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21934-9059/kubeconfig"}}
	{"specversion":"1.0","id":"42932261-12dd-4c1a-ae5f-c3b0df03dd8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9059/.minikube"}}
	{"specversion":"1.0","id":"d25f9bae-d525-4d66-869a-368182b26ac0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"898ccc52-2588-44af-a1dd-08c715c99813","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"720011cd-ceb4-42f0-b8b8-af5d1ab87fa5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"4af05cdc-3943-4dbe-a9ae-ebf42c770eb1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c21b3b5e-8db1-48a2-b4da-3f90c126aa4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4d28ea90-cb1a-493f-94bc-bf276a63986b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ae724491-94f2-478f-950e-446d2d2f36c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-525275\" primary control-plane node in \"insufficient-storage-525275\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6e41c6e4-9b55-46ec-8e73-12a5d98a6b5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763588073-21934 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"d7941c39-8776-4e51-854f-db3f40c670b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"c9241525-f016-43fd-8b73-a057341301e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
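Each line of the stdout above is one CloudEvents-style JSON event emitted by `minikube start --output=json`; the `io.k8s.sigs.minikube.error` event carries the `RSRC_DOCKER_STORAGE` name and exit code 26 that this test keys on. A minimal Go sketch for scanning such a stream for error events (the struct mirrors only the keys visible above, not minikube's own types):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors the fields visible in the JSON lines above.
	type event struct {
		Type string `json:"type"`
		Data struct {
			Name     string `json:"name"`
			Message  string `json:"message"`
			ExitCode string `json:"exitcode"`
		} `json:"data"`
	}

	func main() {
		// Pipe the JSON-lines output of "minikube start --output=json" into stdin.
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some event lines are long
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip anything that is not a JSON event
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error event %s (exit code %s): %s\n",
					ev.Data.Name, ev.Data.ExitCode, ev.Data.Message)
			}
		}
	}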
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-525275 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-525275 --output=json --layout=cluster: exit status 7 (310.808561ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-525275","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-525275","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 00:15:13.035166  193025 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-525275" does not appear in /home/jenkins/minikube-integration/21934-9059/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-525275 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-525275 --output=json --layout=cluster: exit status 7 (299.732176ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-525275","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-525275","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 00:15:13.335741  193132 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-525275" does not appear in /home/jenkins/minikube-integration/21934-9059/kubeconfig
	E1122 00:15:13.346835  193132 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/insufficient-storage-525275/events.json: no such file or directory

                                                
                                                
** /stderr **
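The `status --output=json --layout=cluster` payload shown in both runs above is plain JSON, so the 507/InsufficientStorage result can be decoded with a struct that mirrors the visible fields. A hedged Go sketch (the field set covers only what the sample shows, not minikube's full status schema):

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// clusterStatus mirrors only the fields visible in the sample above.
	type clusterStatus struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
		Nodes      []struct {
			Name       string `json:"Name"`
			StatusCode int    `json:"StatusCode"`
			StatusName string `json:"StatusName"`
		} `json:"Nodes"`
	}

	func main() {
		// Pipe "minikube status -p <profile> --output=json --layout=cluster" into stdin.
		var st clusterStatus
		if err := json.NewDecoder(os.Stdin).Decode(&st); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		// 507/"InsufficientStorage" is the value TestInsufficientStorage expects here.
		fmt.Printf("%s: %d (%s), %d node(s)\n", st.Name, st.StatusCode, st.StatusName, len(st.Nodes))
	}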
helpers_test.go:175: Cleaning up "insufficient-storage-525275" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-525275
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-525275: (1.927459067s)
--- PASS: TestInsufficientStorage (12.08s)

                                                
                                    
TestRunningBinaryUpgrade (46.22s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3914243137 start -p running-upgrade-415297 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3914243137 start -p running-upgrade-415297 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (20.527153842s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-415297 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-415297 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (20.959464056s)
helpers_test.go:175: Cleaning up "running-upgrade-415297" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-415297
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-415297: (2.099011201s)
--- PASS: TestRunningBinaryUpgrade (46.22s)

                                                
                                    
TestKubernetesUpgrade (337.84s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-882262 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-882262 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (26.773673532s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-882262
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-882262: (4.458066335s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-882262 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-882262 status --format={{.Host}}: exit status 7 (98.194011ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-882262 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-882262 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m44.17534006s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-882262 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-882262 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-882262 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (95.804018ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-882262] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-9059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-882262
	    minikube start -p kubernetes-upgrade-882262 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8822622 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-882262 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-882262 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-882262 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (18.978974942s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-882262" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-882262
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-882262: (3.196943822s)
--- PASS: TestKubernetesUpgrade (337.84s)

                                                
                                    
TestMissingContainerUpgrade (137.95s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1169866055 start -p missing-upgrade-090860 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1169866055 start -p missing-upgrade-090860 --memory=3072 --driver=docker  --container-runtime=containerd: (1m8.307574355s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-090860
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-090860
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-090860 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-090860 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m4.29892321s)
helpers_test.go:175: Cleaning up "missing-upgrade-090860" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-090860
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-090860: (2.035582066s)
--- PASS: TestMissingContainerUpgrade (137.95s)

                                                
                                    
TestPause/serial/Start (52.28s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-812988 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-812988 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (52.275714438s)
--- PASS: TestPause/serial/Start (52.28s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.84s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-812988 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-812988 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.815120257s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.84s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.74s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.74s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (114.54s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.4159627121 start -p stopped-upgrade-385150 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.4159627121 start -p stopped-upgrade-385150 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (1m18.998429005s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.4159627121 -p stopped-upgrade-385150 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.4159627121 -p stopped-upgrade-385150 stop: (11.766001659s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-385150 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-385150 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (23.770914786s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (114.54s)

                                                
                                    
TestPause/serial/Pause (0.76s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-812988 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.76s)

                                                
                                    
TestPause/serial/VerifyStatus (0.37s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-812988 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-812988 --output=json --layout=cluster: exit status 2 (366.058974ms)

                                                
                                                
-- stdout --
	{"Name":"pause-812988","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-812988","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.37s)

                                                
                                    
TestPause/serial/Unpause (0.67s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-812988 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.67s)

                                                
                                    
TestPause/serial/PauseAgain (0.97s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-812988 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.97s)

                                                
                                    
TestPause/serial/DeletePaused (2.97s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-812988 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-812988 --alsologtostderr -v=5: (2.972892632s)
--- PASS: TestPause/serial/DeletePaused (2.97s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.45s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-812988
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-812988: exit status 1 (20.04ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-812988: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.45s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-385150
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-385150: (1.166369543s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-714059 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-714059 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (88.553678ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-714059] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-9059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (22.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-714059 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-714059 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (22.462900958s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-714059 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (22.86s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (6.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-714059 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-714059 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (4.031447529s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-714059 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-714059 status -o json: exit status 2 (320.508444ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-714059","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-714059
E1122 00:19:06.952851   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/functional-383183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-714059: (2.061619937s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (6.41s)

                                                
                                    
TestNetworkPlugins/group/false (3.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-687868 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-687868 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (183.068699ms)

                                                
                                                
-- stdout --
	* [false-687868] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-9059/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9059/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:19:02.040777  242867 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:19:02.041116  242867 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:19:02.041132  242867 out.go:374] Setting ErrFile to fd 2...
	I1122 00:19:02.041139  242867 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:19:02.041519  242867 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-9059/.minikube/bin
	I1122 00:19:02.042171  242867 out.go:368] Setting JSON to false
	I1122 00:19:02.043611  242867 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-6","uptime":3681,"bootTime":1763767061,"procs":351,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1122 00:19:02.043697  242867 start.go:143] virtualization: kvm guest
	I1122 00:19:02.045766  242867 out.go:179] * [false-687868] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1122 00:19:02.047194  242867 notify.go:221] Checking for updates...
	I1122 00:19:02.047227  242867 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:19:02.048629  242867 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:19:02.049924  242867 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-9059/kubeconfig
	I1122 00:19:02.051073  242867 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-9059/.minikube
	I1122 00:19:02.052242  242867 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1122 00:19:02.053542  242867 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:19:02.055509  242867 config.go:182] Loaded profile config "NoKubernetes-714059": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I1122 00:19:02.055641  242867 config.go:182] Loaded profile config "cert-expiration-427330": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:19:02.055779  242867 config.go:182] Loaded profile config "kubernetes-upgrade-882262": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:19:02.055919  242867 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:19:02.085013  242867 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1122 00:19:02.085143  242867 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:19:02.149085  242867 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-22 00:19:02.137666717 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-6 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1122 00:19:02.149219  242867 docker.go:319] overlay module found
	I1122 00:19:02.151620  242867 out.go:179] * Using the docker driver based on user configuration
	I1122 00:19:02.152750  242867 start.go:309] selected driver: docker
	I1122 00:19:02.152774  242867 start.go:930] validating driver "docker" against <nil>
	I1122 00:19:02.152788  242867 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:19:02.154490  242867 out.go:203] 
	W1122 00:19:02.155572  242867 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1122 00:19:02.156478  242867 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-687868 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-687868

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-687868

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-687868

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-687868

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-687868

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-687868

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-687868

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-687868

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-687868

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-687868

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-687868

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-687868" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-687868" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21934-9059/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:19:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: NoKubernetes-714059
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21934-9059/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:16:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-427330
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21934-9059/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:17:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-882262
contexts:
- context:
    cluster: NoKubernetes-714059
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:19:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-714059
  name: NoKubernetes-714059
- context:
    cluster: cert-expiration-427330
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:16:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-427330
  name: cert-expiration-427330
- context:
    cluster: kubernetes-upgrade-882262
    user: kubernetes-upgrade-882262
  name: kubernetes-upgrade-882262
current-context: NoKubernetes-714059
kind: Config
users:
- name: NoKubernetes-714059
  user:
    client-certificate: /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/NoKubernetes-714059/client.crt
    client-key: /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/NoKubernetes-714059/client.key
- name: cert-expiration-427330
  user:
    client-certificate: /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/cert-expiration-427330/client.crt
    client-key: /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/cert-expiration-427330/client.key
- name: kubernetes-upgrade-882262
  user:
    client-certificate: /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/kubernetes-upgrade-882262/client.crt
    client-key: /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/kubernetes-upgrade-882262/client.key
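The kubectl config above is the shared test kubeconfig (KUBECONFIG=/home/jenkins/minikube-integration/21934-9059/kubeconfig) and lists the three profiles that were live at this point; the false-687868 profile was never created, which is why the context lookups above fail. A small Go sketch for inspecting such a file with client-go's clientcmd loader (illustrative only, not part of the test suite):

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the same kubeconfig the tests point at via the KUBECONFIG variable.
		cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
			os.Exit(1)
		}
		fmt.Println("current-context:", cfg.CurrentContext)
		for name, cluster := range cfg.Clusters {
			fmt.Printf("cluster %s -> %s\n", name, cluster.Server)
		}
	}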

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-687868

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-687868"

                                                
                                                
----------------------- debugLogs end: false-687868 [took: 3.587365168s] --------------------------------
helpers_test.go:175: Cleaning up "false-687868" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-687868
--- PASS: TestNetworkPlugins/group/false (3.95s)

                                                
                                    
TestNoKubernetes/serial/Start (4.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-714059 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-714059 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (4.117374014s)
--- PASS: TestNoKubernetes/serial/Start (4.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (54.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-462319 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-462319 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (54.250833126s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (54.25s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21934-9059/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-714059 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-714059 "sudo systemctl is-active --quiet service kubelet": exit status 1 (291.86105ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
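
Note: VerifyK8sNotRunning passes because the command fails. A --no-kubernetes profile never starts the kubelet unit, so the systemd probe exits non-zero (status 3 in the run above) and the test treats that failure as success. A minimal hand-run sketch of the same check, using only the commands shown above:

    # ask systemd inside the node whether the kubelet unit is active;
    # a non-zero exit confirms Kubernetes components are not running
    out/minikube-linux-amd64 ssh -p NoKubernetes-714059 "sudo systemctl is-active kubelet" \
      || echo "kubelet inactive, as expected for a --no-kubernetes profile"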

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (38.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (35.91665907s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (2.533232613s)
--- PASS: TestNoKubernetes/serial/ProfileList (38.45s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (51.67s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-781232 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-781232 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (51.669804628s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.67s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-714059
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-714059: (1.340062571s)
--- PASS: TestNoKubernetes/serial/Stop (1.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-714059 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-714059 --driver=docker  --container-runtime=containerd: (6.996796854s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-714059 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-714059 "sudo systemctl is-active --quiet service kubelet": exit status 1 (304.580019ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (41.73s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-491677 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-491677 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (41.732562341s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (41.73s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-462319 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-462319 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.07s)
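
Note: EnableAddonWhileActive turns on the metrics-server addon with the image pointed at registry.k8s.io/echoserver:1.4 under a fake registry, then only confirms the deployment object exists. A small sketch of the same step plus an extra check of the resulting container image; the jsonpath query is illustrative and not something the harness runs:

    # enable the addon with overridden image/registry, as the test does
    out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-462319 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    # inspect the image the metrics-server deployment actually ended up with
    kubectl --context old-k8s-version-462319 -n kube-system \
      get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'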

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-462319 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-462319 --alsologtostderr -v=3: (12.200889397s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-781232 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-781232 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.82s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-781232 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-781232 --alsologtostderr -v=3: (12.212864496s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.21s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-462319 -n old-k8s-version-462319
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-462319 -n old-k8s-version-462319: exit status 7 (83.747659ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-462319 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
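
Note: minikube status encodes cluster state in its exit code, and the harness accepts exit status 7 with "Stopped" on stdout here because the profile was just stopped. A hedged sketch of the same sequence, tolerating the non-zero status before enabling the dashboard addon while the profile is down:

    # a stopped profile makes the host probe exit non-zero (7 in the run above)
    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-462319 -n old-k8s-version-462319 \
      || echo "profile is stopped (may be ok)"
    # addons can still be toggled while the profile is stopped
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-462319 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4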

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (44.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-462319 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-462319 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (43.806612067s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-462319 -n old-k8s-version-462319
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.15s)
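
Note: SecondStart simply re-runs the start command against the stopped profile and then confirms the host came back. A trimmed-down, hand-run version of that restart-and-verify step (only a subset of the flags shown above):

    # restart the stopped profile
    out/minikube-linux-amd64 start -p old-k8s-version-462319 --memory=3072 --wait=true \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.28.0
    # the host field should now report Running and the command should exit 0
    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-462319 -n old-k8s-version-462319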

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-781232 -n no-preload-781232
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-781232 -n no-preload-781232: exit status 7 (90.148388ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-781232 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (49.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-781232 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-781232 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (48.82228229s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-781232 -n no-preload-781232
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (49.19s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-491677 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-491677 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-491677 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-491677 --alsologtostderr -v=3: (12.301770695s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.30s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-491677 -n embed-certs-491677
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-491677 -n embed-certs-491677: exit status 7 (93.597651ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-491677 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (48.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-491677 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-491677 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (48.077636419s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-491677 -n embed-certs-491677
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (48.48s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-hskht" [a236db3c-111c-4afc-b0e3-0a39a486780c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003956808s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
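
Note: UserAppExistsAfterStop polls for up to 9 minutes until the dashboard pod restarted by SecondStart is healthy, matching on its k8s-app label. A roughly equivalent one-liner with kubectl wait; the wait command is an illustrative stand-in for the harness's poll loop, with the 9m timeout mirroring it:

    kubectl --context old-k8s-version-462319 -n kubernetes-dashboard \
      wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m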

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-hskht" [a236db3c-111c-4afc-b0e3-0a39a486780c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004048102s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-462319 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-462319 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-462319 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-462319 -n old-k8s-version-462319
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-462319 -n old-k8s-version-462319: exit status 2 (357.386179ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-462319 -n old-k8s-version-462319
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-462319 -n old-k8s-version-462319: exit status 2 (336.274474ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-462319 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-462319 -n old-k8s-version-462319
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-462319 -n old-k8s-version-462319
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.04s)
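
Note: Pause freezes the control plane and kubelet, checks that status now reports Paused/Stopped (which is why the two probes above exit with status 2), then unpauses and re-checks. Condensed into a hand-run sketch:

    out/minikube-linux-amd64 pause -p old-k8s-version-462319
    # while paused, both probes exit non-zero: APIServer=Paused, Kubelet=Stopped
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-462319 -n old-k8s-version-462319 || true
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-462319 -n old-k8s-version-462319 || true
    out/minikube-linux-amd64 unpause -p old-k8s-version-462319
    # after unpause the same probes should report Running and exit 0
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-462319 -n old-k8s-version-462319
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-462319 -n old-k8s-version-462319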

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-b547s" [a73aaf71-7474-431d-9912-7a9597df0af6] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004280977s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (77.45s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-418191 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-418191 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m17.453557093s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (77.45s)
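
Note: default-k8s-diff-port starts the cluster with --apiserver-port=8444 instead of the default 8443. A small, illustrative follow-up (not part of the test) that checks the generated kubeconfig entry really points at that port; the jsonpath filter is an assumption for illustration:

    # start with a non-default API server port, as above
    out/minikube-linux-amd64 start -p default-k8s-diff-port-418191 --memory=3072 \
      --apiserver-port=8444 --driver=docker --container-runtime=containerd --kubernetes-version=v1.34.1
    # the cluster's server URL in kubeconfig should end in :8444
    kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-418191")].cluster.server}'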

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-b547s" [a73aaf71-7474-431d-9912-7a9597df0af6] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007315093s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-781232 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-781232 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.35s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.7s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-781232 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-781232 --alsologtostderr -v=1: (1.177416712s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-781232 -n no-preload-781232
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-781232 -n no-preload-781232: exit status 2 (393.201728ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-781232 -n no-preload-781232
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-781232 -n no-preload-781232: exit status 2 (376.407418ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-781232 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-781232 -n no-preload-781232
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-781232 -n no-preload-781232
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.70s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (30.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-401244 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-401244 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (30.183491433s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (30.18s)
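
Note: the newest-cni profile starts with --network-plugin=cni, hands kubeadm the pod network CIDR 10.42.0.0/16 via --extra-config, and only waits for the apiserver, system pods and default service account because no CNI is installed yet. An illustrative follow-up (not run by the test) that reads the CIDR the node actually received:

    # inspect the pod CIDR kubeadm assigned to the single node;
    # it is expected to fall inside 10.42.0.0/16
    kubectl --context newest-cni-401244 get nodes -o jsonpath='{.items[0].spec.podCIDR}'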

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (40.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-687868 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-687868 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (40.973810284s)
--- PASS: TestNetworkPlugins/group/auto/Start (40.97s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4dvfm" [26cb374c-dfd4-466b-a3c6-5b5286772b53] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.083836983s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4dvfm" [26cb374c-dfd4-466b-a3c6-5b5286772b53] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003172381s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-491677 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-491677 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-491677 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-491677 -n embed-certs-491677
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-491677 -n embed-certs-491677: exit status 2 (385.745638ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-491677 -n embed-certs-491677
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-491677 -n embed-certs-491677: exit status 2 (350.504502ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-491677 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-491677 -n embed-certs-491677
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-491677 -n embed-certs-491677
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (44.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-687868 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-687868 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (44.349992157s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (44.35s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-401244 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (3.54s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-401244 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-401244 --alsologtostderr -v=3: (3.535543186s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.54s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-401244 -n newest-cni-401244
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-401244 -n newest-cni-401244: exit status 7 (95.041964ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-401244 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (11.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-401244 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-401244 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (11.069403472s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-401244 -n newest-cni-401244
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.41s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-401244 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.13s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-401244 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-401244 -n newest-cni-401244
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-401244 -n newest-cni-401244: exit status 2 (387.433244ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-401244 -n newest-cni-401244
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-401244 -n newest-cni-401244: exit status 2 (396.793731ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-401244 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-401244 -n newest-cni-401244
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-401244 -n newest-cni-401244
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-687868 "pgrep -a kubelet"
I1122 00:22:39.941033   14530 config.go:182] Loaded profile config "auto-687868": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-687868 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jwhgw" [94cf0d5d-4a84-4646-a8fd-0594bcf908b3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jwhgw" [94cf0d5d-4a84-4646-a8fd-0594bcf908b3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003415876s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.23s)
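
Note: NetCatPod pushes the shared netcat deployment with "kubectl replace --force", which deletes any existing object and recreates it so each network-plugin profile gets a fresh rollout, then waits for the app=netcat pod to become Ready. Sketch of the same pattern; the wait command is an illustrative stand-in for the harness's 15m poll loop:

    kubectl --context auto-687868 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-687868 wait --for=condition=Ready pod -l app=netcat --timeout=15m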

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (56.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-687868 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-687868 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (56.593015342s)
--- PASS: TestNetworkPlugins/group/calico/Start (56.59s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-687868 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-687868 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-687868 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
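
Note: DNS, Localhost and HairPin are the three connectivity probes run inside the netcat deployment: cluster DNS resolution, reaching the pod's own port over localhost, and the hairpin case of the pod reaching itself through its own Service name. All three are plain kubectl exec calls, shown together here exactly as the harness runs them:

    # cluster DNS resolves service names inside the pod
    kubectl --context auto-687868 exec deployment/netcat -- nslookup kubernetes.default
    # the pod can reach its own listening port over localhost
    kubectl --context auto-687868 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # hairpin: the pod reaches itself through the netcat service
    kubectl --context auto-687868 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"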

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-v55v2" [a2d5e881-4bdc-4d1a-a746-6a95bbbdcb73] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005020199s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-418191 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-418191 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-418191 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-418191 --alsologtostderr -v=3: (12.300117981s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-687868 "pgrep -a kubelet"
I1122 00:23:07.222680   14530 config.go:182] Loaded profile config "kindnet-687868": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-687868 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-l542t" [70dbd5c3-f99a-4aba-adc9-ed57a8b768fd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-l542t" [70dbd5c3-f99a-4aba-adc9-ed57a8b768fd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005376038s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (54.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-687868 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-687868 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (54.897472148s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-687868 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-687868 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-687868 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-418191 -n default-k8s-diff-port-418191
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-418191 -n default-k8s-diff-port-418191: exit status 7 (100.26509ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-418191 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.76s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-418191 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-418191 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (50.347785854s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-418191 -n default-k8s-diff-port-418191
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.76s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-wdd2m" [f286af78-a681-43d0-aefb-ad524519b392] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004988715s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
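
Note: ControllerPod only passes once the CNI's node agent is up; for calico that means a Running calico-node pod in kube-system. An equivalent label-based wait, illustrative rather than what the harness runs, with the 10m timeout mirroring it:

    kubectl --context calico-687868 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=10m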

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (62.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-687868 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-687868 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m2.180278614s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (62.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-687868 "pgrep -a kubelet"
I1122 00:23:44.165813   14530 config.go:182] Loaded profile config "calico-687868": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-687868 replace --force -f testdata/netcat-deployment.yaml
I1122 00:23:44.510625   14530 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1122 00:23:44.531307   14530 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7h6kx" [c27fbfc1-b5ed-4e27-90b4-615aab2c8c41] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7h6kx" [c27fbfc1-b5ed-4e27-90b4-615aab2c8c41] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004636354s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.42s)
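The NetCatPod step, together with the DNS, Localhost, and HairPin probes that follow, all exercise the same netcat deployment created from testdata/netcat-deployment.yaml. A rough manual equivalent of the full sequence, assuming the calico-687868 context from this run, is:

	# recreate the deployment and wait for its pod, mirroring the test's 15m window
	kubectl --context calico-687868 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context calico-687868 wait --for=condition=Ready pod -l app=netcat --timeout=15m
	# in-cluster DNS, localhost reachability, and hairpin (service-to-self) checks
	kubectl --context calico-687868 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context calico-687868 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context calico-687868 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"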

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-687868 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-687868 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-687868 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-687868 "pgrep -a kubelet"
I1122 00:24:05.650572   14530 config.go:182] Loaded profile config "custom-flannel-687868": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-687868 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9qlh7" [4be631cc-d563-460c-ae09-8b9d1cf84568] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1122 00:24:06.952609   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/functional-383183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-9qlh7" [4be631cc-d563-460c-ae09-8b9d1cf84568] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003836988s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xq8pj" [c0d07a39-27c9-464e-9ce1-31f38df0a7fa] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003637455s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xq8pj" [c0d07a39-27c9-464e-9ce1-31f38df0a7fa] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004504262s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-418191 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-687868 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-687868 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-687868 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (46.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-687868 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-687868 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (46.861987578s)
--- PASS: TestNetworkPlugins/group/flannel/Start (46.86s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-418191 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.59s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-418191 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-418191 --alsologtostderr -v=1: (1.163648959s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-418191 -n default-k8s-diff-port-418191
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-418191 -n default-k8s-diff-port-418191: exit status 2 (355.401474ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-418191 -n default-k8s-diff-port-418191
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-418191 -n default-k8s-diff-port-418191: exit status 2 (375.773259ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-418191 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-418191 -n default-k8s-diff-port-418191
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-418191 -n default-k8s-diff-port-418191
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.59s)
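The Pause step drives a pause/verify/unpause cycle through the CLI; the exit status 2 from the status checks is expected while components are paused, as the log notes. A condensed manual equivalent, assuming the same profile name, is:

	# pause, confirm the reported state (status exits 2 while paused), then unpause and re-check
	out/minikube-linux-amd64 pause -p default-k8s-diff-port-418191 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-418191 -n default-k8s-diff-port-418191
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-418191 -n default-k8s-diff-port-418191
	out/minikube-linux-amd64 unpause -p default-k8s-diff-port-418191 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-418191 -n default-k8s-diff-port-418191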

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (43.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-687868 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-687868 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (43.444092633s)
--- PASS: TestNetworkPlugins/group/bridge/Start (43.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-687868 "pgrep -a kubelet"
I1122 00:24:43.137415   14530 config.go:182] Loaded profile config "enable-default-cni-687868": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-687868 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9j8x8" [47f71165-828c-4bc5-a8b7-bb25c35cf676] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9j8x8" [47f71165-828c-4bc5-a8b7-bb25c35cf676] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.00542188s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-687868 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-687868 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-687868 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-4mgxl" [e4e3f319-2230-4ee5-97e8-d54dd8138462] Running
E1122 00:25:04.720112   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/old-k8s-version-462319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:25:04.726607   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/old-k8s-version-462319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:25:04.738978   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/old-k8s-version-462319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:25:04.761272   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/old-k8s-version-462319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:25:04.802724   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/old-k8s-version-462319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:25:04.884238   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/old-k8s-version-462319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:25:05.045586   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/old-k8s-version-462319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:25:05.367511   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/old-k8s-version-462319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:25:06.009836   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/old-k8s-version-462319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:25:07.292041   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/old-k8s-version-462319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004153785s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-687868 "pgrep -a kubelet"
E1122 00:25:09.853564   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/old-k8s-version-462319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1122 00:25:10.052237   14530 config.go:182] Loaded profile config "flannel-687868": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-687868 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8mrzq" [b26a03e3-18bc-4603-b422-ddc9d57142ac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8mrzq" [b26a03e3-18bc-4603-b422-ddc9d57142ac] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003987646s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-687868 "pgrep -a kubelet"
I1122 00:25:10.793332   14530 config.go:182] Loaded profile config "bridge-687868": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (8.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-687868 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2gxpx" [6d1f3ac0-91f4-429e-8e0d-88a8936413a7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2gxpx" [6d1f3ac0-91f4-429e-8e0d-88a8936413a7] Running
E1122 00:25:14.576312   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:25:14.582794   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:25:14.594218   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:25:14.615634   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:25:14.657094   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:25:14.738756   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:25:14.900416   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:25:14.974890   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/old-k8s-version-462319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:25:15.221691   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:25:15.863474   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:25:17.145590   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/no-preload-781232/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.004146385s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-687868 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-687868 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-687868 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-687868 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-687868 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-687868 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    

Test skip (26/333)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-832968" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-832968
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
E1122 00:19:01.070567   14530 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/addons-368820/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic.go:615: 
----------------------- debugLogs start: kubenet-687868 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-687868

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-687868

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-687868

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-687868

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-687868

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-687868

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-687868

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-687868

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-687868

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-687868

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-687868

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-687868" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-687868" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21934-9059/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:16:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-427330
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21934-9059/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:17:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-882262
contexts:
- context:
    cluster: cert-expiration-427330
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:16:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-427330
  name: cert-expiration-427330
- context:
    cluster: kubernetes-upgrade-882262
    user: kubernetes-upgrade-882262
  name: kubernetes-upgrade-882262
current-context: ""
kind: Config
users:
- name: cert-expiration-427330
  user:
    client-certificate: /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/cert-expiration-427330/client.crt
    client-key: /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/cert-expiration-427330/client.key
- name: kubernetes-upgrade-882262
  user:
    client-certificate: /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/kubernetes-upgrade-882262/client.crt
    client-key: /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/kubernetes-upgrade-882262/client.key
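This dump appears to be the merged client kubeconfig left over from earlier profiles (cert-expiration-427330 and kubernetes-upgrade-882262); no kubenet-687868 context was ever created, which is why every kubectl probe in this post-mortem reports a missing context. To inspect or switch contexts by hand one could run, for example:

	# list known contexts and select one explicitly
	kubectl config get-contexts
	kubectl config use-context cert-expiration-427330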

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-687868

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-687868"

                                                
                                                
----------------------- debugLogs end: kubenet-687868 [took: 3.744001867s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-687868" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-687868
--- SKIP: TestNetworkPlugins/group/kubenet (3.94s)
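Note: every host-side probe in the debug log above prints the same "Profile "kubenet-687868" not found" hint because this profile was never started before the group was skipped; the collector still runs each check against a profile that does not exist. As an illustration only (not code from this suite), the following Go sketch shells out to "minikube profile list" and checks for a profile name before attempting any host-side collection; the helper name and the naive substring match are assumptions.

// profile_check.go: illustrative only -- not minikube test code.
// Runs "minikube profile list" and looks for a profile name in the output
// before attempting any host-side debug collection.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// profileExists is a hypothetical helper; the substring match is deliberately naive.
func profileExists(name string) (bool, error) {
	out, err := exec.Command("minikube", "profile", "list").CombinedOutput()
	if err != nil {
		// Propagate the error together with the command output for context.
		return false, fmt.Errorf("minikube profile list: %v: %s", err, out)
	}
	return strings.Contains(string(out), name), nil
}

func main() {
	ok, err := profileExists("kubenet-687868")
	if err != nil {
		fmt.Println("could not list profiles:", err)
		return
	}
	if !ok {
		fmt.Println(`profile not found; skip host probes and suggest "minikube start -p kubenet-687868"`)
		return
	}
	fmt.Println("profile exists; safe to collect host-side debug logs")
}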

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-687868 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-687868

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-687868

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-687868

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-687868

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-687868

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-687868

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-687868

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-687868

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-687868

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-687868

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-687868

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-687868" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-687868

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-687868

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-687868

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-687868

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-687868" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-687868" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21934-9059/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:16:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-427330
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21934-9059/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:17:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-882262
contexts:
- context:
    cluster: cert-expiration-427330
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:16:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-427330
  name: cert-expiration-427330
- context:
    cluster: kubernetes-upgrade-882262
    user: kubernetes-upgrade-882262
  name: kubernetes-upgrade-882262
current-context: ""
kind: Config
users:
- name: cert-expiration-427330
  user:
    client-certificate: /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/cert-expiration-427330/client.crt
    client-key: /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/cert-expiration-427330/client.key
- name: kubernetes-upgrade-882262
  user:
    client-certificate: /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/kubernetes-upgrade-882262/client.crt
    client-key: /home/jenkins/minikube-integration/21934-9059/.minikube/profiles/kubernetes-upgrade-882262/client.key
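Note: the kubectl config dumped above defines only the cert-expiration-427330 and kubernetes-upgrade-882262 entries and leaves current-context empty, which is why every probe in this debug log that targets cilium-687868 reports "context was not found for specified context". As an aside, here is a minimal Go sketch (assuming the client-go library; not part of this test suite) that loads a kubeconfig and reports whether a named context is defined.

// context_check.go: illustrative only -- not minikube test code.
// Loads a kubeconfig with client-go and checks whether a named context exists,
// mirroring the "context was not found" failures in the debug log above.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := os.Getenv("KUBECONFIG") // assumption: the kubeconfig path comes from the environment
	if path == "" {
		path = clientcmd.RecommendedHomeFile // default ~/.kube/config
	}

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}

	name := "cilium-687868" // the context the debug collector asked for
	if _, ok := cfg.Contexts[name]; !ok {
		// This is the condition behind "context was not found for specified context".
		fmt.Printf("context %q not found (current-context=%q, %d contexts defined)\n",
			name, cfg.CurrentContext, len(cfg.Contexts))
		return
	}
	fmt.Printf("context %q exists\n", name)
}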

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-687868

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-687868" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687868"

                                                
                                                
----------------------- debugLogs end: cilium-687868 [took: 4.091700739s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-687868" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-687868
--- SKIP: TestNetworkPlugins/group/cilium (4.27s)
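For reference, the SKIP above comes from net_test.go:102, which bails out before any cluster is started ("interfering with other tests and is outdated"), so the entire debug log is collected against a profile that never existed. The following is a generic Go testing sketch of how such a "--- SKIP" line is produced; it is illustrative only, not the actual net_test.go, and the subtest name and guard condition are assumptions.

// skip_sketch_test.go: a generic illustration of how a "--- SKIP" line like the
// one above is produced with the standard testing package; not the real net_test.go.
package example

import "testing"

func TestNetworkPluginsSketch(t *testing.T) {
	t.Run("cilium", func(t *testing.T) {
		outdated := true // assumption standing in for the real guard at net_test.go:102
		if outdated {
			// t.Skip marks the subtest as skipped and stops it immediately,
			// which the test report renders as "--- SKIP: .../cilium".
			t.Skip("Skipping the test as it's interfering with other tests and is outdated")
		}
		// ... the actual network-plugin checks would run here ...
	})
}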

                                                
                                    